- student
- USA
- narain1.github.io
- @narain13579
- https://narain1.netlify.app
Starred repositories
How to optimize some algorithms in CUDA.
🎉 Modern CUDA Learn Notes with PyTorch: fp32, fp16, bf16, fp8/int8, flash_attn, sgemm, sgemv, warp/block reduce, dot, elementwise, softmax, layernorm, rmsnorm.
Learn CUDA Programming, published by Packt
Flash Attention in ~100 lines of CUDA (forward pass only)
Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor core with WMMA API and MMA PTX instruction.
Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5).
Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS.
High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline.
gevtushenko / llm.c
Forked from karpathy/llm.c
LLM training in simple, raw C/CUDA
Flash Attention in raw CUDA C, beating PyTorch
terrelln / dietgpu
Forked from facebookresearch/dietgpu
GPU implementation of a fast generalized ANS (asymmetric numeral system) entropy encoder and decoder, with extensions for lossless compression of numerical and other data types in HPC/ML applications.