A framework to evaluate your Stable Diffusion model
TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization and sparsity. It compresses deep learning models for downstream deployment frame…
optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052
Faster generation with text-to-image diffusion models.
stable diffusion, controlnet, tensorrt, accelerate
TensorRT Extension for Stable Diffusion Web UI
The first open-source Triton inference engine for Stable Diffusion, specifically for SDXL
Deploy a Stable Diffusion model with ONNX/TensorRT + Triton Inference Server
[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs.
Material for cuda-mode lectures
OneDiff: An out-of-the-box acceleration library for diffusion models.
MindSpore online courses: Step into LLM
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficie…
Accepted by New Trends in Image Restoration and Enhancement workshop (NTIRE), in conjunction with CVPR 2024.
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
Official Code for Stable Cascade
This repo holds the competitions (information, solutions, summaries, memories) that our team has participated in