Stars
Language: Python
Sort by: Most stars
Stable Diffusion web UI
FastAPI framework, high performance, easy to learn, fast to code, ready for production
The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface.
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
A chatbot built on large language models, with integrations for WeChat Official Accounts, WeCom (enterprise WeChat) apps, Feishu, DingTalk, and more. Supported models include GPT-3.5/GPT-4o/GPT-4.0/Claude/ERNIE Bot (文心一言)/iFlytek Spark (讯飞星火)/Tongyi Qianwen (通义千问)/Gemini/GLM-4/Kimi/LinkAI. It handles text, voice, and images, can access the operating system and the internet, and supports custom enterprise customer-service bots built on your own knowledge base.
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Generative Models by Stability AI
We write your reusable computer vision tools. 💜
Crawlers for Xiaohongshu notes and comments, Douyin videos and comments, Kuaishou videos and comments, Bilibili videos and comments, and Weibo posts and comments.
SearXNG is a free internet metasearch engine which aggregates results from various search services and databases. Users are neither tracked nor profiled.
Build AI Assistants with memory, knowledge and tools.
Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
Automate Creation of YouTube Shorts using MoviePy.
An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.
Implementation of paper - YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information
🚀 A knowledge-base Q&A system built on LLMs. Ready to use out of the box, model-agnostic, and flexibly orchestrated; supports rapid embedding into third-party business systems. An official 1Panel product.
FreeAskInternet is a completely free, PRIVATE, and LOCALLY running search aggregator and answer generator using MULTIPLE LLMs, with no GPU needed. The user can ask a question and the system will make a mu…
Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python.
Official implementation of OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on
🔮 SuperDuper: Bring AI to your database! Build, deploy and manage any AI application directly with your existing data infrastructure, without moving your data. Including streaming inference, scalab…
To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-cache, achieving up to 20x compression with minimal performance loss.
Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference
[CVPR 2024] Real-Time Open-Vocabulary Object Detection
[WIP] Layer Diffusion for WebUI (via Forge)
Official repository of the aiXcoder-7B Code Large Language Model.