- Shenzhen, China
- https://www.zhihu.com/people/luke-china
Stars
Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
The official repo of Qwen (通义千问), the chat and pretrained large language models proposed by Alibaba Cloud.
Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint
Generative Representational Instruction Tuning
A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ 🍸 🍹 🍷
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Qwen2.5 is the large language model series developed by Qwen team, Alibaba Cloud.
MiniCPM3-4B: An edge-side LLM that surpasses GPT-3.5-Turbo.
Official release of InternLM2.5 base and chat models, with 1M context support.
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
🔥 Curated Chinese prompts 🔥: a ChatGPT usage guide to improve ChatGPT's playability and usability! 🚀
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
An unnecessarily tiny implementation of GPT-2 in NumPy.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese dialogue LLM)
JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
ChatGLM-6B: An Open Bilingual Dialogue Language Model
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
High-Resolution Image Synthesis with Latent Diffusion Models
The standard data-centric AI package for data quality and machine learning with messy, real-world data and labels.
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) together for easy use. We welcome open-source enthusiasts…
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
💬 Open source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants