Stars
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
LlamaIndex is a data framework for your LLM applications
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Code and documentation to train Stanford's Alpaca models, and generate the data.
Pretrain, finetune, and deploy AI models on multiple GPUs and TPUs with zero code changes.
A high-throughput and memory-efficient inference and serving engine for LLMs
Use ChatGPT to summarize arXiv papers. Accelerates the entire research workflow: full-paper summarization, professional translation, polishing, peer review, and review responses, all powered by ChatGPT.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
ChatGLM2-6B: An Open-Source Bilingual Chat LLM
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production.
Large Language Model Text Generation Inference
tensorboard for pytorch (and chainer, mxnet, numpy, ...)
Model parallel transformers in JAX and Haiku
Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model
Torchmetrics - Machine learning metrics for distributed, scalable PyTorch applications.
Dense Passage Retriever is a set of tools and models for open-domain Q&A tasks.
Efficient Retrieval Augmentation and Generation Framework
CNNs for Sentence Classification in PyTorch
Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Intel® Data Center GPUs
RayDP provides simple APIs for running Spark on Ray and integrating Spark with AI libraries.
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
Pretrain, finetune and serve LLMs on Intel platforms with Ray