nix-stable-diffusion

Flake for running Stable Diffusion (SD) on NixOS

What's done

  • Nix devShell capable of running the InvokeAI and stable-diffusion-webui flavors of SD, with no need to reach for pip or conda (AMD ROCm support included)
  • ...???
  • PROFIT

How to use it?

InvokeAI

  1. Clone the repo
  2. Clone the InvokeAI submodule
  3. Run nix develop .#invokeai.{default,nvidia,amd} and wait for the shell to build
    1. .#invokeai.default builds a shell which overrides only the bare minimum required for SD to run
    2. .#invokeai.amd builds a shell which overrides the torch packages with ROCm-enabled binary versions
    3. .#invokeai.nvidia builds a shell with an overlay explicitly setting cudaSupport = true for torch
  4. Inside the InvokeAI directory, run python scripts/preload_models.py to preload models (this does not include the SD weights)
  5. Obtain the SD weights and place them at models/ldm/stable-diffusion-v1/model.ckpt
  6. Run the CLI with python scripts/invoke.py or the GUI with python scripts/invoke.py --web
  7. For more detailed instructions, consult https://invoke-ai.github.io/InvokeAI/installation/INSTALL_LINUX/
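
Put together, the InvokeAI steps above might look like the following shell session. This is a sketch: the repo URL placeholder and the choice of the amd flavor are assumptions, so adjust both for your setup.

```shell
# <repo-url> is a placeholder for this repository's clone URL
git clone <repo-url> nix-stable-diffusion
cd nix-stable-diffusion
git submodule update --init InvokeAI

# Pick the flavor matching your hardware: default, nvidia, or amd
nix develop .#invokeai.amd

cd InvokeAI
python scripts/preload_models.py   # preloads support models, not the SD weights
# place the weights at models/ldm/stable-diffusion-v1/model.ckpt, then:
python scripts/invoke.py --web     # drop --web for the CLI instead
```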

stable-diffusion-webui aka the AUTOMATIC1111 fork

  1. Clone the repo
  2. Clone the stable-diffusion-webui submodule
  3. Run nix develop .#webui.{default,nvidia,amd} and wait for the shell to build
    1. .#webui.default builds a shell which overrides only the bare minimum required for SD to run
    2. .#webui.amd builds a shell which overrides the torch packages with ROCm-enabled binary versions
    3. .#webui.nvidia builds a shell with an overlay explicitly setting cudaSupport = true for torch
  4. Obtain the SD weights and place them at stable-diffusion-webui/models/Stable-diffusion/model.ckpt
  5. Inside the stable-diffusion-webui/ directory, run python launch.py to start the web server. It preloads the required models at startup; additional models, such as CLIP, are loaded just before their first use.
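
As a shell session, the webui steps above might look roughly like this. Again a sketch: the repo URL placeholder and the nvidia flavor are assumptions to be swapped for your own.

```shell
# <repo-url> is a placeholder for this repository's clone URL
git clone <repo-url> nix-stable-diffusion
cd nix-stable-diffusion
git submodule update --init stable-diffusion-webui

nix develop .#webui.nvidia   # or .#webui.default / .#webui.amd

# weights go to stable-diffusion-webui/models/Stable-diffusion/model.ckpt
cd stable-diffusion-webui
python launch.py             # starts the web server; required models preload at startup
```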

What still needs to be done

  • devShell with CUDA support (should be trivial, but requires a volunteer with an NVIDIA GPU)
  • Missing package definitions should be submitted to Nixpkgs
  • Investigate the ROCm device warning on startup
  • Apply patches so that all downloaded models go into one dedicated folder
  • Create a PR to pynixify adding a "skip-errors mode" so that no ugly patches are necessary
  • Shell hooks for initial setup?
  • Maybe this devShell should itself be turned into a package?
  • Add additional flavors of SD?

Acknowledgements

Many thanks to https://github.com/cript0nauta/pynixify, which generated all the boilerplate for the missing Python packages.
Also thanks to https://github.com/colemickens/stable-diffusion-flake and https://github.com/skogsbrus/stable-diffusion-nix-flake for inspiration and some useful code snippets.