- Beijing
- https://www.zekangli.com
Stars
MINT-1T: A one trillion token multimodal interleaved dataset.
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).
Tools for merging pretrained large language models.
Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
Virtual whiteboard for sketching hand-drawn like diagrams
Mamba-Chat: A chat LLM based on the state-space model architecture 🐍
🐫 CAMEL: Finding the Scaling Law of Agents. A multi-agent framework. https://www.camel-ai.org
A collection of GPT system prompts and various prompt injection/leaking knowledge.
DataDM is your private data assistant. Slide into your data's DMs
👾 Open source implementation of the ChatGPT Code Interpreter
[ICLR 2024] Efficient Streaming Language Models with Attention Sinks
📷 EasyPhoto | Your Smart AI Photo Generator.
GMoE could be the next backbone model for many kinds of generalization tasks.
BayLing ("百聆") is an English/Chinese large language model based on LLaMA, enhanced with language alignment. It shows superior English/Chinese capability, achieving 90% of ChatGPT's performance on multilingual and general-task benchmarks.
🔥A curated collection of Chinese prompts🔥 and a ChatGPT usage guide, to improve ChatGPT's versatility and usability! 🚀
StableLM: Stability AI Language Models
Inpaint anything using Segment Anything and inpainting models.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks showing how to use the model.
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts…
A data set based on all arXiv publications, pre-processed for NLP, including structured full-text and citation network
Development repository for the Triton language and compiler