
# Releases: IntrinsicLabsAI/intrinsic-model-server

## 0.17.0 (11 Jan 21:35, e9b550e)

Bump llama-cpp-python from 0.2.20 to 0.2.27 (#293)

Bumps [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) from 0.2.20 to 0.2.27.

<details>
<summary>Changelog</summary>

*Sourced from [llama-cpp-python's changelog](https://github.com/abetlen/llama-cpp-python/blob/main/CHANGELOG.md).*

### [0.2.27]

- feat: Update llama.cpp to ggerganov/llama.cpp@b3a7c20b5c035250257d2b62851c379b159c899a
- feat: Add `saiga` chat format by [@femoiseev](https://github.com/femoiseev) in [#1050](https://redirect.github.com/abetlen/llama-cpp-python/issues/1050)
- feat: Added `chatglm3` chat format by [@xaviviro](https://github.com/xaviviro) in [#1059](https://redirect.github.com/abetlen/llama-cpp-python/issues/1059) (usage sketched after this list)
- fix: Correct typo in README.md by [@qeleb](https://github.com/qeleb) in [#1058](https://redirect.github.com/abetlen/llama-cpp-python/issues/1058)
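
Both new formats plug into the existing `chat_format` argument of the high-level API. A minimal usage sketch, assuming llama-cpp-python >= 0.2.27; the model path and prompt are placeholders:

```python
# Select one of the newly added chat formats; the model path and the
# prompt below are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # placeholder path
    chat_format="chatglm3",     # "saiga" is selected the same way
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response["choices"][0]["message"]["content"])
```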

### [0.2.26]

- feat: Update llama.cpp to ggerganov/llama.cpp@f6793491b5af6da75edad34d6f503ef86d31b09f

### [0.2.25]

- feat(server): Multi model support by [@D4ve-R](https://github.com/D4ve-R) in [#931](https://redirect.github.com/abetlen/llama-cpp-python/issues/931) (a config sketch follows this list)
- feat(server): Support none defaulting to infinity for completions by [@swg](https://github.com/swg) in [#111](https://redirect.github.com/abetlen/llama-cpp-python/issues/111)
- feat(server): Implement OpenAI-API-compatible authentication by [@docmeth2](https://github.com/docmeth2) in [#1010](https://redirect.github.com/abetlen/llama-cpp-python/issues/1010)
- fix: text_offset of multi-token characters by [@twaka](https://github.com/twaka) in [#1037](https://redirect.github.com/abetlen/llama-cpp-python/issues/1037)
- fix: ctypes bindings for kv override by [@phiharri](https://github.com/phiharri) in [#1011](https://redirect.github.com/abetlen/llama-cpp-python/issues/1011)
- fix: ctypes definitions of llama_kv_cache_view_update and llama_kv_cache_view_free by [@e-c-d](https://github.com/e-c-d) in [#1028](https://redirect.github.com/abetlen/llama-cpp-python/issues/1028)
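
Multi model support (#931) lets a single server process host several models behind OpenAI-style aliases. The sketch below writes one plausible config file; the `models` schema and the `--config_file` flag are assumptions about the server's settings, and every path and alias is a placeholder:

```python
# Generate a multi-model config for `python3 -m llama_cpp.server`.
# The "models" schema and the --config_file flag are assumptions;
# paths and aliases are placeholders.
import json

config = {
    "host": "0.0.0.0",
    "port": 8000,
    "models": [
        {"model": "./models/llama-2-7b.Q4_K_M.gguf", "model_alias": "llama-2-7b"},
        {"model": "./models/mistral-7b.Q4_K_M.gguf", "model_alias": "mistral-7b"},
    ],
}

with open("server_config.json", "w") as f:
    json.dump(config, f, indent=2)

# Assumed invocation:
#   python3 -m llama_cpp.server --config_file server_config.json
# Clients then pick a model per request via the OpenAI-style "model"
# field, and #1010's API-key authentication can gate the endpoints.
```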

### [0.2.24]

- feat: Update llama.cpp to ggerganov/llama.cpp@0e18b2e7d0b5c0a509ea40098def234b8d4a938a
- feat: Add offload_kqv option to llama and server by [@abetlen](https://github.com/abetlen) in 095c65000642a3cf73055d7428232fb18b73c6f3
- feat: n_ctx=0 now uses the n_ctx_train of the model by [@DanieleMorotti](https://github.com/DanieleMorotti) in [#1015](https://redirect.github.com/abetlen/llama-cpp-python/issues/1015) (see the sketch after this list)
- feat: logits_to_logprobs supports both 2-D and 3-D logits arrays by [@kddubey](https://github.com/kddubey) in [#1002](https://redirect.github.com/abetlen/llama-cpp-python/issues/1002)
- fix: Remove f16_kv, add offload_kqv fields in low-level and llama APIs by [@brandonrobertz](https://github.com/brandonrobertz) in [#1019](https://redirect.github.com/abetlen/llama-cpp-python/issues/1019)
- perf: Don't convert logprobs arrays to lists by [@kddubey](https://github.com/kddubey) in [#1021](https://redirect.github.com/abetlen/llama-cpp-python/issues/1021)
- docs: Fix README.md functionary demo typo by [@evelynmitchell](https://github.com/evelynmitchell) in [#996](https://redirect.github.com/abetlen/llama-cpp-python/issues/996)
- examples: Update low_level_api_llama_cpp.py to match current API by [@jsoma](https://github.com/jsoma) in [#1023](https://redirect.github.com/abetlen/llama-cpp-python/issues/1023)
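
The n_ctx=0 and offload_kqv changes both land on the `Llama` constructor. A short sketch, with a placeholder model path:

```python
# n_ctx=0 falls back to the context length the model was trained with
# (n_ctx_train); offload_kqv keeps the KV cache on the GPU alongside
# any offloaded layers. The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # placeholder path
    n_ctx=0,                    # 0 -> use the model's n_ctx_train
    n_gpu_layers=-1,            # offload all layers (GPU builds only)
    offload_kqv=True,           # new in 0.2.24
)
print(llm.n_ctx())  # the resolved context size
```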

### [0.2.23]

- Update llama.cpp to ggerganov/llama.cpp@948ff137ec37f1ec74c02905917fa0afc9b97514
- Add qwen chat format by [@yhfgyyf](https://github.com/yhfgyyf) in [#1005](https://redirect.github.com/abetlen/llama-cpp-python/issues/1005)
- Add support for running the server with SSL by [@rgerganov](https://github.com/rgerganov) in [#994](https://redirect.github.com/abetlen/llama-cpp-python/issues/994)
- Replace logits_to_logprobs implementation with numpy equivalent to llama.cpp by [@player1537](https://github.com/player1537) in [#991](https://redirect.github.com/abetlen/llama-cpp-python/issues/991) (illustrated after this list)
- Fix UnsupportedOperation: fileno in suppress_stdout_stderr by [@zocainViken](https://github.com/zocainViken) in [#961](https://redirect.github.com/abetlen/llama-cpp-python/issues/961)
- Add Pygmalion chat format by [@chiensen](https://github.com/chiensen) in [#986](https://redirect.github.com/abetlen/llama-cpp-python/issues/986)
- README.md multimodal params fix by [@zocainViken](https://github.com/zocainViken) in [#967](https://redirect.github.com/abetlen/llama-cpp-python/issues/967)
- Fix minor typo in README by [@aniketmaurya](https://github.com/aniketmaurya) in [#958](https://redirect.github.com/abetlen/llama-cpp-python/issues/958)
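
The logits_to_logprobs change (#991) swaps the previous implementation for a vectorized numpy computation. An illustrative re-derivation of the quantity it computes (a numerically stable log-softmax), not the library's exact code:

```python
# Numerically stable log-softmax over the vocabulary axis.
import numpy as np

def logits_to_logprobs(logits: np.ndarray, axis: int = -1) -> np.ndarray:
    max_logits = np.max(logits, axis=axis, keepdims=True)  # stability shift
    shifted = logits - max_logits
    log_sum_exp = np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))
    return shifted - log_sum_exp

logits = np.array([[2.0, 1.0, 0.1]])
logprobs = logits_to_logprobs(logits)
print(np.exp(logprobs).sum(axis=-1))  # each row sums to 1.0
```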

### [0.2.22]

- Update llama.cpp to ggerganov/llama.cpp@8a7b2fa528f130631a5f43648481596ab320ed5a
- Fix conflict with transformers library by kddubey in [#952](https://redirect.github.com/abetlen/llama-cpp-python/issues/952)

### [0.2.21]

- Update llama.cpp to ggerganov/llama.cpp@64e64aa2557d97490b2fe1262b313e2f4a1607e3

... (truncated)

</details>
<details>
<summary>Commits</summary>

- [`75d0527`](https://github.com/abetlen/llama-cpp-python/commit/75d0527fd782a792af8612e55b0a3f2dad469ae9) Bump version
- [`fffcd01`](https://github.com/abetlen/llama-cpp-python/commit/fffcd0181c2b58a084daebc6df659520d0c73337) Update llama.cpp
- [`907b9e9`](https://github.com/abetlen/llama-cpp-python/commit/907b9e9d4281336072519fbf11e885768ad0ff0b) Add Saiga chat format ([#1050](https://redirect.github.com/abetlen/llama-cpp-python/issues/1050))
- [`f766b70`](https://github.com/abetlen/llama-cpp-python/commit/f766b70c9a63801f6f27dc92b4ab822f92055bc9) Fix: Correct typo in README.md ([#1058](https://redirect.github.com/abetlen/llama-cpp-python/issues/1058))
- [`cf743ec`](https://github.com/abetlen/llama-cpp-python/commit/cf743ec5d32cc84e68295da8442ccf3a64e635f1) Added ChatGLM chat format ([#1059](https://redirect.github.com/abetlen/llama-cpp-python/issues/1059))
- [`eb9c7d4`](https://github.com/abetlen/llama-cpp-python/commit/eb9c7d4ed8984bdff6585e38d04e7d17bf14155e) Update llama.cpp
- [`011c363`](https://github.com/abetlen/llama-cpp-python/commit/011c3630f5a130505458c29d58f1654d5efba3bf) Bump version
- [`969ea6a`](https://github.com/abetlen/llama-cpp-python/commit/969ea6a2c029964175316dd71e4497f241fcc6a4) Update llama.cpp
- [`f952d45`](https://github.com/abetlen/llama-cpp-python/commit/f952d45c2cd0ccb63b117130c1b1bf4897987e4c) Update llama.cpp
- [`f6f157c`](https://github.com/abetlen/llama-cpp-python/commit/f6f157c06dac24296ec990e912f80c4f8dbe1591) Update bug report instructions for new build process
- Additional commits viewable in [compare view](https://github.com/abetlen/llama-cpp-python/compare/v0.2.20...v0.2.27)

</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=llama-cpp-python&package-manager=pip&previous-version=0.2.20&new-version=0.2.27)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

## 0.16.0 (17 Nov 04:39, dfd6133)

disable autorelease

## 0.15.0 (17 Nov 04:32, 938404b)

Autorelease 0.15.0.

## 0.14.0 (17 Nov 04:30, 5166b47)

Autorelease 0.14.0.

## 0.13.0 (15 Nov 22:04, a3eaaa3)

try more

## 0.12.0 (15 Nov 21:45, d96adef)

build separate images for server/worker (#227)

## 0.11.0 (15 Nov 21:06, a72fede)

gRPC support, mTLS for workers (#226)

## 0.10.0 (14 Nov 21:08, 50ddc4b)

fix Dockerfile entrypoint

## 0.9.0 (14 Nov 20:28, 42a03a1)

bump minimum Python version, latest Poetry again

## 0.8.0 (14 Nov 19:38, 4253e73)

bump lock file