Tianjin University
- 92 Weijin Road, Nankai District, Tianjin, China
- https://orcid.org/0000-0002-8958-3163
Stars
A Library for Advanced Deep Time Series Models.
Digital Avatar Conversational System - Linly-Talker. 😄✨ Linly-Talker is an intelligent AI system that combines large language models (LLMs) with visual models to create a novel human-AI interaction…
Official implementations for paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models
Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value (ICML 2023)
Latex code for making neural networks diagrams
Drawing Bayesian networks, graphical models, tensors, technical frameworks, and illustrations in LaTeX.
Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
PITI: Pretraining is All You Need for Image-to-Image Translation
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
LaTeX template for Tianjin University doctoral/master's theses, revised to meet the 2021 requirements; runs directly on Overleaf. :star: A thesis written with this template was successfully archived by the Tianjin University Library! (2021.12.24)
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance.
OpenAI-style API for open large language models: use LLMs just as you would ChatGPT! Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Xverse, SqlCoder, CodeLLaMA, ChatGLM, ChatGLM2, ChatGLM3, etc.…
Our maintained PFN repository. Come here to train SOTA PFNs.
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
【CVPR 2024 Highlight】Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models
WebUI extension for ControlNet
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Train medical LLMs with a pipeline covering continued pre-training (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
🤖 Chat with your SQL database 📊. Accurate Text-to-SQL Generation via LLMs using RAG 🔄.
Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer".
UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab, plus a Gradio ChatGPT-like chat UI to demonstrate your language models.
Prototype sample code demonstrating how to run CodeLlama locally and connect it to MySQL using LangChain.
Perception-Diagnosis-Optimization based on Training Dynamics
A repository that contains models, datasets, and fine-tuning techniques for DB-GPT, with the purpose of enhancing model performance in Text-to-SQL