HazyResearch / ThunderKittens
Tile primitives for speedy kernels
Instant neural graphics primitives: lightning fast NeRF and more
CUDA Library Samples
FlashInfer: Kernel Library for LLM Serving
cuGraph - RAPIDS Graph Analytics Library
Flash Attention in ~100 lines of CUDA (forward pass only; the online-softmax recurrence it builds on is sketched after this list)
CUDA accelerated rasterization of gaussian splatting
A simple but fast implementation of matrix multiplication in CUDA (a tiled-matmul sketch follows this list).
LLM training in simple, raw C/CUDA
[ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
RAFT contains fundamental, widely used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and serve as building blocks for writing high-performance applications.
CUDA checkpoint and restore utility
Reference implementation of the Megalodon 7B model
Causal depthwise conv1d in CUDA, with a PyTorch interface (a minimal kernel sketch follows this list)
Efficient GPU support for LLM inference with 6-bit quantization (FP6).
CUDA Kernel Benchmarking Library
NCCL Tests (a minimal all-reduce sketch follows this list)
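For the Flash Attention entry above, the core of the forward pass is an online-softmax recurrence: a running row max and normalizer are updated as each score streams in, so the full score matrix is never materialized. Below is a minimal host-side sketch of that recurrence for a single query row; the variable names (m, l, o) and the toy scores/values are illustrative, not taken from the listed repo.

```cuda
// One-pass softmax-weighted sum for a single attention row: keep a
// running max m, denominator l, and unnormalized output o, rescaling
// the old partial sums whenever the max grows.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> scores = {0.5f, 2.0f, -1.0f, 3.0f};  // q . k_i
    std::vector<float> values = {1.0f, 2.0f, 3.0f, 4.0f};   // v_i (scalar for clarity)

    float m = -INFINITY;  // running max of scores seen so far
    float l = 0.0f;       // running softmax denominator
    float o = 0.0f;       // running unnormalized weighted sum

    for (size_t i = 0; i < scores.size(); ++i) {
        float m_new = std::fmax(m, scores[i]);
        float scale = std::exp(m - m_new);          // rescale old partials
        float p     = std::exp(scores[i] - m_new);  // weight of the new score
        l = l * scale + p;
        o = o * scale + p * values[i];
        m = m_new;
    }
    // o / l equals dot(softmax(scores), values), computed in one pass.
    printf("attention output = %f\n", o / l);
    return 0;
}
```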
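For the matrix-multiplication entry, the standard speed trick is shared-memory tiling: each thread block stages a TILE x TILE sub-block of A and B on chip and reuses it TILE times. Here is a minimal self-contained sketch, assuming square row-major matrices with N a multiple of TILE; the kernel and constants are illustrative, not the listed repo's code.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

#define TILE 16

__global__ void matmul_tiled(const float* A, const float* B, float* C, int N) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    // Walk the K dimension one tile at a time, staging each tile of A
    // and B in shared memory so each global value is loaded only once
    // per tile and then reused TILE times from on-chip storage.
    for (int t = 0; t < N / TILE; ++t) {
        As[threadIdx.y][threadIdx.x] = A[row * N + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * N + col];
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * N + col] = acc;
}

int main() {
    const int N = 256;
    size_t bytes = N * N * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    dim3 block(TILE, TILE);
    dim3 grid(N / TILE, N / TILE);
    matmul_tiled<<<grid, block>>>(A, B, C, N);
    cudaDeviceSynchronize();

    // With these inputs every output element should be 2 * N.
    printf("C[0] = %f (expected %f)\n", C[0], 2.0f * N);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```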
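For the causal depthwise conv1d entry: depthwise means each channel is convolved with its own length-K filter, and causal means output position t reads only inputs at positions at or before t, with earlier positions implicitly zero-padded. A minimal kernel sketch under those assumptions follows; the (channels, length) row-major layout and the launch shape are my choices, not the repo's interface.

```cuda
#include <cuda_runtime.h>

// x: (C, L) input, w: (C, K) per-channel filters, y: (C, L) output.
__global__ void causal_dw_conv1d(const float* x, const float* w,
                                 float* y, int C, int L, int K) {
    int c = blockIdx.y;                              // channel index
    int t = blockIdx.x * blockDim.x + threadIdx.x;   // time step
    if (c >= C || t >= L) return;

    float acc = 0.0f;
    for (int k = 0; k < K; ++k) {
        int src = t - k;              // only current and past samples
        if (src >= 0)                 // positions before t = 0 act as zeros
            acc += w[c * K + k] * x[c * L + src];
    }
    y[c * L + t] = acc;
}

// Example launch: one grid row per channel, 256 time steps per block.
// dim3 block(256);
// dim3 grid((L + 255) / 256, C);
// causal_dw_conv1d<<<grid, block>>>(x, w, y, C, L, K);
```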
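For the NCCL Tests entry, the pattern those test binaries exercise looks roughly like this: one communicator per GPU in a single process, a grouped ncclAllReduce across them, then a synchronize and a correctness check. This is a minimal sketch; the buffer size and all-ones check are illustrative, and the real nccl-tests additionally time the collective and report bus bandwidth.

```cuda
#include <nccl.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    int nDev = 0;
    cudaGetDeviceCount(&nDev);
    const size_t count = 1 << 20;  // floats per GPU (illustrative size)

    std::vector<ncclComm_t> comms(nDev);
    std::vector<cudaStream_t> streams(nDev);
    std::vector<float*> buf(nDev);
    std::vector<float> ones(count, 1.0f);

    // One communicator per visible GPU; nullptr means devices 0..nDev-1.
    ncclCommInitAll(comms.data(), nDev, nullptr);

    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(i);
        cudaStreamCreate(&streams[i]);
        cudaMalloc(&buf[i], count * sizeof(float));
        cudaMemcpy(buf[i], ones.data(), count * sizeof(float),
                   cudaMemcpyHostToDevice);
    }

    // Group the per-GPU calls so NCCL launches them as one collective.
    ncclGroupStart();
    for (int i = 0; i < nDev; ++i)
        ncclAllReduce(buf[i], buf[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
    }

    // With all-ones inputs, every element of the in-place sum equals nDev.
    float check = 0.0f;
    cudaSetDevice(0);
    cudaMemcpy(&check, buf[0], sizeof(float), cudaMemcpyDeviceToHost);
    printf("allreduce[0] = %f (expected %d)\n", check, nDev);

    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(i);
        cudaFree(buf[i]);
        cudaStreamDestroy(streams[i]);
        ncclCommDestroy(comms[i]);
    }
    return 0;
}
```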