Stable Diffusion web UI (Python, updated Jun 10, 2024)
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
🪩 Create Disco Diffusion artworks in one line
An advanced singing voice synthesis system with high fidelity, expressiveness, controllability, and flexibility, based on DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism
Core engine for singing voice conversion and singing voice cloning
MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising
[CVPR 2024 - Oral, Best Paper Award Candidate] Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation
Lumina-T2X is a unified framework for Text to Any Modality Generation
The official implementation of paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion"
T3Bench: Benchmarking Current Progress in Text-to-3D Generation
MiniSora: A community that aims to explore the implementation path and future development direction of Sora.
Efficient Retrieval Augmentation and Generation Framework
Inpaint Anything extension performs stable diffusion inpainting on a browser UI using masks from Segment Anything.
Fast stable diffusion on CPU
A family of diffusion models for text-to-audio generation.
Generative AI for the Blender VSE: Text, video or image to video, image and audio in Blender Video Sequence Editor.
Wunjo CE: Face Swap, Lip Sync, Control Remove Objects & Text & Background, Restyling, Audio Separator, Clone Voice, TTS. Open Source, Local & Free.
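The projects listed above all build on the same denoising-diffusion principle: data is gradually corrupted with Gaussian noise over many timesteps, and a model is trained to reverse that corruption. A minimal sketch of the DDPM forward (noising) process in plain Python, using an illustrative linear beta schedule (the constants and helper names here are assumptions for illustration, not taken from any repository above):

```python
import math
import random

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    # Linearly spaced per-step noise variances (a common illustrative choice).
    step = (beta_end - beta_start) / (timesteps - 1)
    return [beta_start + i * step for i in range(timesteps)]

def q_sample(x0, t, alpha_bars, noise):
    # Forward process in closed form:
    #   x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    a = alpha_bars[t]
    return [math.sqrt(a) * x + math.sqrt(1 - a) * e for x, e in zip(x0, noise)]

T = 1000
betas = linear_beta_schedule(T)

# alpha_bar_t is the cumulative product of (1 - beta_i) up to step t.
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

random.seed(0)
x0 = [1.0, -0.5, 0.25]                     # a toy "image" of three pixels
noise = [random.gauss(0, 1) for _ in x0]   # the Gaussian noise eps
x_T = q_sample(x0, T - 1, alpha_bars, noise)
# At t = T-1, alpha_bar is close to zero, so x_T is nearly pure noise;
# the generative model's job is to learn the reverse of this process.
```

A trained network would then predict the noise `eps` at each step so the process can be run in reverse from pure noise back to data; libraries such as 🤗 Diffusers package both the schedules and the reverse samplers behind pipeline APIs.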