Shanghai Jiao Tong University
- Shanghai, China
Starred repositories
A series of large language models trained from scratch by developers @01-ai
Evaluating LLMs' multi-round chat capability by assessing conversations generated between two LLM instances.
A curated list of awesome leaderboard-oriented resources for foundation models
Code for CVPR 2024 paper: Positive-Unlabeled Learning by Latent Group-Aware Meta Disambiguation
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
🐝 GPTSwarm: LLM agents as (Optimizable) Graphs
TextGrad: Automatic "Differentiation" via Text — using large language models to backpropagate textual gradients.
A collection of AWESOME things about Graph-Related LLMs.
Chart-to-Text: Generating Natural Language Explanations for Charts by Adapting the Transformer Model
This repository mainly records reading notes on top-conference papers relevant to LLM algorithm engineers (multimodality, PEFT, few-shot QA, RAG, LLM interpretability, Agents, CoT).
Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features, and more.
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators"
Langflow is a low-code app builder for RAG and multi-agent AI applications. It's Python-based and agnostic to the model, API, or database you use.
Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas"
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
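The idea behind a semantic cache: instead of requiring an exact string match, a new query reuses a cached LLM answer when its embedding is close enough to a stored query's embedding. A minimal sketch, assuming caller-supplied embedding vectors and a cosine-similarity threshold (real systems such as the repo above plug in an embedding model and a vector store):

```python
import math

class SemanticCache:
    """Toy semantic cache: return a cached answer when a new query's
    embedding is within a cosine-similarity threshold of a stored one."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def get(self, embedding):
        # Linear scan; production caches use an ANN index instead.
        best_answer, best_sim = None, 0.0
        for emb, answer in self.entries:
            sim = self._cosine(embedding, emb)
            if sim > best_sim:
                best_answer, best_sim = answer, sim
        return best_answer if best_sim >= self.threshold else None

    def put(self, embedding, answer):
        self.entries.append((embedding, answer))

cache = SemanticCache(threshold=0.9)
cache.put([1.0, 0.0, 0.1], "Paris")
hit = cache.get([0.99, 0.02, 0.12])  # near-duplicate query: cache hit
miss = cache.get([0.0, 1.0, 0.0])    # unrelated query: miss, so call the LLM
```

On a miss the application falls back to the LLM and `put`s the new (embedding, answer) pair, so repeated or paraphrased questions skip the API call.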
😎 Awesome list of tools and projects with the awesome LangChain framework
A programming framework for agentic AI 🤖
An Open-source Framework for Data-centric, Self-evolving Autonomous Language Agents
Qwen2.5 is the large language model series developed by Qwen team, Alibaba Cloud.
Official repository for Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning
Examples and guides for using the OpenAI API
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.