nix-stable-diffusion

Flake for running SD on NixOS

What's done

  • Nix devShell capable of running both the InvokeAI and stable-diffusion-webui flavors of SD without needing to reach for pip or conda (including AMD ROCM support) - see the quick-start sketch after this list
  • ...???
  • PROFIT
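
For orientation, the standard Nix CLI can list everything the flake exposes before you pick a flavor (a minimal sketch; <repo-url> is a placeholder for wherever you cloned from):

    $ git clone <repo-url> nix-stable-diffusion && cd nix-stable-diffusion
    $ git submodule update --init    # the webui flavor needs the submodule
    $ nix flake show                 # lists the invokeai packages and webui devShells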

How to use it?

InvokeAI

  1. Clone the repo
  2. Run nix run -L .#invokeai.{default,amd,nvidia} -- --web and wait for the package to build (an example session follows this list)
    1. .#invokeai.default builds a package which overrides the bare minimum required for SD to run
    2. .#invokeai.amd builds a package which overrides the torch packages with ROCM-enabled bin versions
    3. .#invokeai.nvidia builds a package with an overlay explicitly setting cudaSupport = true for torch
  3. Download the weights
    1. Built-in CLI way. Upon first launch, InvokeAI will check its default config dir (~/invokeai) and suggest running the built-in TUI startup configuration script, which helps you download the default models or supply existing ones to InvokeAI. Follow the instructions and finish the configuration. Note: you can also pass the --root_dir option to pick another location for the configs/models installation. More fine-grained directory setup options are also available - run nix run .#invokeai -- --help for more info.
    2. Built-in GUI way. Recent versions of InvokeAI added a GUI for model management. See the upstream docs on that matter.
  4. CLI arguments for invokeai itself can be supplied after the -- separator of the nix run command
  5. If you need to run additional scripts (like invokeai-merge or invokeai-ti), run nix build .#invokeai and call them from the build result, e.g. ./result/bin/invokeai-ti.
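
Putting the steps above together, a typical first session might look like this (a sketch only: the amd flavor and the ~/sd-data root dir are illustrative choices, not defaults of the flake):

    $ nix run -L .#invokeai.amd -- --web --root_dir ~/sd-data
    # on first launch, follow the TUI configuration script to fetch the default models
    $ nix build .#invokeai           # exposes the auxiliary scripts under ./result/bin
    $ ./result/bin/invokeai-ti --help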

stable-diffusion-webui aka AUTOMATIC1111 fork

  1. Clone the repo
  2. Clone the stable-diffusion-webui submodule
  3. Run nix develop .#webui.{default,nvidia,amd} and wait for the shell to build
    1. .#webui.default builds a shell which overrides the bare minimum required for SD to run
    2. .#webui.amd builds a shell which overrides the torch packages with ROCM-enabled bin versions
    3. .#webui.nvidia builds a shell with an overlay explicitly setting cudaSupport = true for torch
  4. Obtain the SD weights and place them at stable-diffusion-webui/models/Stable-diffusion/model.ckpt
  5. Inside the stable-diffusion-webui/ directory, run python launch.py to start the web server. It should preload the required models at startup; additional models, such as CLIP, will be loaded before their first actual use. (An example session follows this list.)
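
In shell form, the whole flow might look like this (a sketch assuming the amd flavor; the checkpoint path is a placeholder for wherever you obtained the weights):

    $ git submodule update --init    # fetch the stable-diffusion-webui submodule
    $ nix develop .#webui.amd        # wait for the shell to build
    $ cp /path/to/sd-weights.ckpt stable-diffusion-webui/models/Stable-diffusion/model.ckpt
    $ cd stable-diffusion-webui
    $ python launch.py               # starts the web server and preloads the required models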

ROCM shenanigans

todo

What still needs to be done

  • devShell with CUDA support (should be trivial, but requires a volunteer with an NVidia GPU)
  • Missing package definitions should be submitted to Nixpkgs
  • Investigate the ROCM device warning on startup
  • Apply patches so that all downloaded models go into one specific folder
  • Create a PR to pynixify adding a "skip-errors mode" so that no ugly patches are necessary
  • Shell hooks for initial setup?
  • Maybe this devShell should itself be turned into a package?
  • Add additional flavors of SD?

Updates and versioning

Current versions:

  • InvokeAI 2.3.1.post2
  • stable-diffusion-webui 27.10.2022

I have no intention of keeping up with the development pace of these apps, especially Automatic's fork :). However, I will occasionally update at least InvokeAI's flake. As for versioning, I will try to follow semver with respect to the submodules as well, which means a major version bump for a submodule = a major version bump for this flake.

Acknowledgements

Many, many thanks to https://github.com/cript0nauta/pynixify, which generated all the boilerplate for the missing Python packages.
Also thanks to https://github.com/colemickens/stable-diffusion-flake and https://github.com/skogsbrus/stable-diffusion-nix-flake for inspiration and some useful code snippets.

Similar projects

todo