Stars
Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
Open-Sora: Democratizing Efficient Video Production for All
Official implementation of Inf-DiT: Upsampling Any-Resolution Image with Memory-Efficient Diffusion Transformer
Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
Open source code for AlphaFold.
Guidance for the postgraduate entrance examination of the Department of Computer Science and Technology, Tsinghua University
A PyTorch Library for Accelerating 3D Deep Learning Research
[ECCV 2024] Single Image to 3D Textured Mesh in 10 seconds with Convolutional Reconstruction Model.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
An inference framework for the RWKV large language model, implemented purely in native PyTorch. The official native implementation is overly complex and lacks extensibility. Let's join the f…
[GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly …
A simple and efficient Mamba implementation in pure PyTorch and MLX.
Generative Models by Stability AI
MMGeneration is a powerful toolkit for generative models, based on PyTorch and MMCV.
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference,…
Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures
[ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding
Scaling RWKV-Like Architectures for Diffusion Models
Ongoing research training transformer models at scale
Remote vanilla PDB (over TCP sockets).
Making large AI models cheaper, faster and more accessible