
English | 简体中文

👋 join us on Twitter, Discord and WeChat


Latest News 🎉

  • [2023/12] TurboMind supports multimodal input. Gradio Demo
  • [2023/11] TurboMind supports loading HF models directly. Click here for details.
  • [2023/11] TurboMind major upgrades, including: Paged Attention, faster attention kernels without sequence length limitation, 2x faster KV8 kernels, Split-K decoding (Flash Decoding), and W4A16 inference for sm_75
  • [2023/09] TurboMind supports Qwen-14B
  • [2023/09] TurboMind supports InternLM-20B
  • [2023/09] TurboMind supports all features of Code Llama: code completion, infilling, chat / instruct, and Python specialist. Click here for the deployment guide
  • [2023/09] TurboMind supports Baichuan2-7B
  • [2023/08] TurboMind supports FlashAttention-2.
  • [2023/08] TurboMind supports Qwen-7B, dynamic NTK-RoPE scaling and dynamic logN scaling
  • [2023/08] TurboMind supports Windows (tp=1)
  • [2023/08] TurboMind supports 4-bit inference, 2.4x faster than FP16, the fastest open-source implementation🚀. Check this guide for detailed info
  • [2023/08] LMDeploy has launched on the HuggingFace Hub, providing ready-to-use 4-bit models.
  • [2023/08] LMDeploy supports 4-bit quantization using the AWQ algorithm.
  • [2023/07] TurboMind supports Llama-2 70B with GQA.
  • [2023/07] TurboMind supports Llama-2 7B/13B.
  • [2023/07] TurboMind supports tensor-parallel inference of InternLM.

Introduction

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams. It has the following core features:

  • Efficient Inference Engine (TurboMind): It introduces key features such as persistent batching (a.k.a. continuous batching), blocked KV cache, dynamic split & fuse, tensor parallelism, and high-performance CUDA kernels, delivering high throughput and low latency for LLM inference.

  • Interactive Inference Mode: By caching the attention k/v during multi-turn dialogues, the engine remembers the dialogue history and avoids re-processing historical sessions (see the sketch after this list).

  • Quantization: LMDeploy supports various quantization methods and efficient inference of quantized models. The reliability of quantization has been verified on models of different scales.
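To make the interactive mode concrete, below is a minimal multi-turn chat sketch using LMDeploy's high-level `pipeline` API. The entry point and OpenAI-style message input follow the LMDeploy documentation, but exact signatures may differ between versions, and the model id is only an example.

```python
# Minimal sketch: multi-turn chat with LMDeploy's high-level pipeline API.
# Assumption: `pipeline` accepts OpenAI-style message lists as documented;
# the model id below is illustrative.
from lmdeploy import pipeline

pipe = pipeline('internlm/internlm-chat-7b')

# First turn.
messages = [{'role': 'user', 'content': 'Hi, please introduce yourself.'}]
first = pipe(messages)
print(first.text)

# Second turn: append the history. Thanks to k/v caching, the engine can
# reuse the attention cache for the shared prefix instead of re-encoding it.
messages += [
    {'role': 'assistant', 'content': first.text},
    {'role': 'user', 'content': 'What tasks are you good at?'},
]
print(pipe(messages).text)
```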

Performance

The TurboMind engine achieves 1.36x to 1.85x higher request throughput than vLLM across models of various sizes. In terms of static inference, the token throughput (output tokens/s) of TurboMind's 4-bit model inference significantly outperforms FP16/BF16 inference, with an improvement of up to 2.4x (a usage sketch follows at the end of this section).

(Figure: v0.1.0 benchmark results)

For inference benchmarks on more devices and with more settings, please refer to the following links:

  • A100
  • 4090
  • 3090
  • 2080
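To try the W4A16 path behind these numbers, a pre-quantized AWQ model can be loaded through the TurboMind backend. The snippet below is a sketch based on the `model_format='awq'` option from the LMDeploy docs; the config class name may vary by version, and the 4-bit model id is hypothetical.

```python
# Sketch: 4-bit (W4A16) inference with the TurboMind backend.
# `TurbomindEngineConfig(model_format='awq')` follows the LMDeploy docs;
# the model id below is a hypothetical pre-quantized AWQ checkpoint.
from lmdeploy import pipeline, TurbomindEngineConfig

pipe = pipeline(
    'lmdeploy/llama2-chat-7b-w4',  # hypothetical 4-bit model id
    backend_config=TurbomindEngineConfig(model_format='awq'),
)
print(pipe('Explain W4A16 inference in one sentence.').text)
```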

Supported Models

LMDeploy has developed two inference engines, PyTorch and TurboMind, each with a different focus. TurboMind strives for ultimate inference performance, while the PyTorch engine, developed purely in Python, aims to lower the barrier for developers.

As shown in the tables below, the inference engines differ in the models they support and in the inference data types. Users can choose the one that best fits their needs; a short selection sketch follows the tables.

TurboMind

| Model | Size | FP16/BF16 | KV INT8 | W4A16 |
| :---: | :---: | :---: | :---: | :---: |
| Llama | 7B - 65B | Yes | Yes | Yes |
| Llama2 | 7B - 70B | Yes | Yes | Yes |
| InternLM | 7B - 20B | Yes | Yes | Yes |
| InternLM-XComposer | 7B | Yes | Yes | Yes |
| QWen | 7B - 72B | Yes | Yes | Yes |
| QWen-VL | 7B | Yes | Yes | Yes |
| Baichuan | 7B | Yes | Yes | Yes |
| Baichuan2 | 7B | Yes | Yes | Yes |
| Code Llama | 7B - 34B | Yes | No | No |

PyTorch

| Model | Size | FP16/BF16 | KV INT8 | W8A8 |
| :---: | :---: | :---: | :---: | :---: |
| Llama | 7B - 65B | Yes | No | Yes |
| Llama2 | 7B - 70B | Yes | No | Yes |
| InternLM | 7B - 20B | Yes | No | Yes |
| Baichuan2 | 7B - 13B | Yes | No | Yes |
| ChatGLM2 | 6B | Yes | No | No |
| Falcon | 7B - 180B | Yes | No | No |
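As a rough sketch of how this choice plays out in code, the backend can be selected explicitly via an engine config when building a pipeline. The config class names follow recent LMDeploy documentation and may differ in older releases.

```python
# Sketch: explicitly choosing an inference engine for the same model.
# Config class names are taken from the LMDeploy docs and may vary by version.
from lmdeploy import pipeline, TurbomindEngineConfig, PytorchEngineConfig

# TurboMind: the performance-oriented engine (FP16/BF16, KV INT8, W4A16).
tm_pipe = pipeline('internlm/internlm-chat-7b',
                   backend_config=TurbomindEngineConfig(tp=1))

# PyTorch: the pure-Python engine, easier to read and extend (FP16/BF16, W8A8).
pt_pipe = pipeline('internlm/internlm-chat-7b',
                   backend_config=PytorchEngineConfig(tp=1))
```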

Getting Started

Please refer to the getting_started section for the basic usage of LMDeploy.

For detailed user guides and advanced guides, please refer to our tutorials.

Contributing

We appreciate all contributions to LMDeploy. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

License

This project is released under the Apache 2.0 license.
