Web-based UI for Stable Diffusion by sd-webui
Visit sd-webui's Discord Server
Installation instructions for Windows and Linux
Want to ask a question or request a feature?
Come to our Discord Server or use Discussions.
Documentation
Want to contribute?
Check the Contribution Guide.
sd-webui provides two user interfaces, each with its own Features and Screenshots pages:
- Gradio
- Streamlit
Stable Diffusion was made possible thanks to a collaboration with Stability AI and Runway and builds upon our previous work:
High-Resolution Image Synthesis with Latent Diffusion Models
Robin Rombach*,
Andreas Blattmann*,
Dominik Lorenz,
Patrick Esser,
Björn Ommer
CVPR '22 Oral
The code is available on GitHub and the PDF at arXiv. Please also visit our Project page.
Stable Diffusion is a latent text-to-image diffusion model. Thanks to a generous compute donation from Stability AI and support from LAION, we were able to train a Latent Diffusion Model on 512x512 images from a subset of the LAION-5B database. Similar to Google's Imagen, this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM. See this section below and the model card.
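For readers who want to see those pieces in motion, here is a minimal, hypothetical text-to-image sketch using the Hugging Face diffusers library. It is an illustration of the model described above, not this repo's own launch path (the web UI is started via webui.cmd / webui.sh), and the checkpoint id shown is one public v1 weight release:

```python
# Minimal sketch, assuming the diffusers library and a public v1 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # half precision helps fit the ~10GB VRAM budget
)
pipe = pipe.to("cuda")

# The frozen CLIP ViT-L/14 text encoder turns the prompt into the conditioning
# that guides the 860M-parameter UNet during iterative denoising.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```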
Stable Diffusion v1 refers to a specific configuration of the model architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and then finetuned on 512x512 images.
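To make the downsampling factor concrete, a quick back-of-the-envelope sketch (illustrative arithmetic only, not code from this repo; v1 uses 4 latent channels):

```python
# With a downsampling-factor 8 autoencoder, diffusion runs in a latent space
# far smaller than pixel space.
height, width, factor, latent_channels = 512, 512, 8, 4

latent_shape = (latent_channels, height // factor, width // factor)
print(latent_shape)  # (4, 64, 64) -- 48x fewer values than a 3x512x512 image
```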
Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions present in its training data. Details on the training procedure and data, as well as the intended use of the model, can be found in the corresponding model card.
- Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase and https://github.com/lucidrains/denoising-diffusion-pytorch. Thanks for open-sourcing!
- The implementation of the transformer encoder is from x-transformers by lucidrains.
BibTeX
@misc{rombach2021highresolution,
  title={High-Resolution Image Synthesis with Latent Diffusion Models},
  author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
  year={2021},
  eprint={2112.10752},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}