Stars
OneDiff: An out-of-the-box acceleration library for diffusion models.
Sourcetrail - free and open-source interactive source explorer
MindSpore online courses: Step into LLM
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently.
Accepted by New Trends in Image Restoration and Enhancement workshop (NTIRE), in conjunction with CVPR 2024.
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
Official Code for Stable Cascade
This repo holds information, solutions, summaries, and memories from the competitions our team has participated in.
Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
[TGRS 2024] DiffCR: A Fast Conditional Diffusion Framework for Cloud Removal from Optical Satellite Images
Generative Models by Stability AI
Hackable and optimized Transformers building blocks, supporting a composable construction.
Easy-to-use and high-performance NLP and LLM framework based on MindSpore, compatible with models and datasets of 🤗Huggingface.
A CUDA tutorial for learning CUDA programming from scratch.
Universal LLM Deployment Engine with ML Compilation
PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool.
Large Language Model Text Generation Inference