* Nix devShell capable of running the InvokeAI and stable-diffusion-webui flavors of SD without needing to reach for pip or conda (including AMD ROCm support)
1. **Built-in way.** Inside InvokeAI's directory, run `python scripts/preload_models.py` to preload the main SD weights and support models. (Downloading the Stable Diffusion weights requires a HuggingFace token.)
2. **Manual way.** If you obtained SD weights from somewhere else, you can skip downloading them with `preload_models.py`. However, you'll have to manually create/edit `InvokeAI/configs/models.yaml` so that your models get loaded. Example configs for SD 1.4, 1.5, and 1.5-inpainting are present in `models.example.yaml`.
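For illustration, an entry in `models.yaml` looks roughly like the following. The model name and paths are placeholders; check `models.example.yaml` for the exact fields your InvokeAI version expects:

```yaml
# hypothetical entry for SD 1.4; adjust the paths to wherever your files live
stable-diffusion-1.4:
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  width: 512
  height: 512
  default: true
```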
1. Run `nix develop .#webui.{default,nvidia,amd}` and wait for the shell to build
   1. `.#webui.default` builds a shell which overrides the bare minimum required for SD to run
   1. `.#webui.amd` builds a shell which overrides the torch packages with ROCm-enabled bin versions
   1. `.#webui.nvidia` builds a shell with an overlay explicitly setting `cudaSupport = true` for torch
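   As a sketch of what the `nvidia` variant amounts to (illustrative only, not the flake's actual code), the overlay enables CUDA for the Python torch package:

   ```nix
   # illustrative nixpkgs overlay: rebuild torch with CUDA support
   final: prev: {
     pythonPackagesExtensions = prev.pythonPackagesExtensions ++ [
       (pyFinal: pyPrev: {
         torch = pyPrev.torch.override { cudaSupport = true; };
       })
     ];
   }
   ```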
1. Obtain SD weights and place them at `stable-diffusion-webui/models/Stable-diffusion/model.ckpt`
1. Inside the `stable-diffusion-webui/` directory, run `python launch.py` to start the web server. It should preload the required models on startup. Additional models, such as CLIP, will be loaded before their first actual use.
I have no intention of keeping up with the development pace of these apps, especially Automatic's fork :) . However, I will occasionally update at least InvokeAI's flake. As for versioning, I will try to follow semver with respect to the submodules as well, which means a major version bump for a submodule = a major version bump for this flake.
Many, many thanks to https://github.com/cript0nauta/pynixify, which generated all the boilerplate for the missing Python packages.
Also thanks to https://github.com/colemickens/stable-diffusion-flake and https://github.com/skogsbrus/stable-diffusion-nix-flake for inspiration and some useful code snippets.