# nix-stable-diffusion

Flake for running SD on NixOS
## What's done
- Nix flake capable of running both the InvokeAI and stable-diffusion-webui flavors of SD without the need to reach for pip or conda (including AMD ROCm support)
- ...???
- PROFIT
## How to use it?

### InvokeAI
- Clone the repo
- Run `nix run .#invokeai.{default,amd} -- --web --root_dir "folder for configs and models"` and wait for the package to build (see the example session after this list).
  - `.#invokeai.default` builds the package with the default torch-bin, which has CUDA support enabled by default.
  - `.#invokeai.amd` builds the package with the torch packages overridden by ROCm-enabled bin versions.
- Weights download
  - Built-in CLI way: upon first launch, InvokeAI checks its default config dir (`~/invokeai`) and suggests running the built-in TUI startup configuration script, which helps you download the default models or supply existing ones to InvokeAI. Follow the instructions and finish the configuration. Note: you can also pass the `--root_dir` option to pick another location for configs/models. More fine-grained directory setup options are available as well; run `nix run .#invokeai.amd -- --help` for more info.
  - Built-in GUI way: recent versions of InvokeAI added a GUI for model management. See the upstream docs on that matter.
- CLI arguments for invokeai itself can be supplied after the `--` part of the `nix run` command.
- If you need to run additional scripts (like `invokeai-merge` or `invokeai-ti`), you can run `nix build .#invokeai.amd` and call those scripts manually from the build output, e.g. `./result/bin/invokeai-ti`.
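For reference, a typical InvokeAI session on an AMD GPU could look like the following; `~/invokeai-data` is just a placeholder path, not something the flake requires:

```sh
# ROCm variant; use .#invokeai.default for the CUDA build instead
nix run .#invokeai.amd -- --web --root_dir ~/invokeai-data

# list every option InvokeAI accepts
nix run .#invokeai.amd -- --help

# build the package once, then call auxiliary scripts from the result
nix build .#invokeai.amd
./result/bin/invokeai-ti
```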
### stable-diffusion-webui aka AUTOMATIC1111 fork
- Clone the repo
- Run `nix run .#webui.{default,amd} -- --data-dir "runtime folder for webui stuff" --ckpt-dir "folder with pre-downloaded main SD models"` and wait for the packages to build (see the example after this list).
  - `.#webui.default` builds the package with the default torch-bin, which has CUDA support enabled by default.
  - `.#webui.amd` builds the package with the torch packages overridden by ROCm-enabled bin versions.
- Webui is not a proper python package by itself, so I had to make a multi-layered wrapper script which sets the required env and args. `bin/flake-launch` is the top-level wrapper, which sets default args and is what runs by default. `bin/launch.py` is a thin wrapper around the original launch.py which only sets PYTHONPATH with the required packages. Both wrappers pass additional args further down the pipeline. To list all available args, run `nix run .#webui.amd -- --help`.
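As above, an example invocation on an AMD GPU; both directory paths are placeholders you should adapt:

```sh
# ROCm variant; use .#webui.default for the CUDA build instead
nix run .#webui.amd -- --data-dir ~/webui-data --ckpt-dir ~/sd-models

# list all args understood by the wrappers and by webui itself
nix run .#webui.amd -- --help
```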
## Hardware quirks

### AMD
If you get the error `hipErrorNoBinaryForGpu: Unable to find code object for all current devices!`, then your GPU is probably not fully supported by ROCm (only a handful of GPUs are by default) and you have to set an environment variable to trick ROCm into running, as shown below: `export HSA_OVERRIDE_GFX_VERSION=10.3.0`
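For example, a full launch with the override might look like this; whether `10.3.0` is the right value depends on your card (it makes ROCm treat the GPU as a gfx1030 device), and the `--root_dir` path is a placeholder:

```sh
# pretend to be a supported gfx1030 card so ROCm loads its kernels
export HSA_OVERRIDE_GFX_VERSION=10.3.0
nix run .#invokeai.amd -- --web --root_dir ~/invokeai-data
```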
### Nvidia
- Please note that I don't have an Nvidia GPU and therefore can't test whether the CUDA functionality actually works. If something is broken in that department, please open an issue, or even better, submit a PR with a proposed fix.
- xformers for CUDA hasn't been tested. The Python package is added to the flake, but it's missing the triton compiler. It might partially work, so please test it and report back :)
## What (probably) still needs to be done
- The most popular missing package definitions should be submitted to Nixpkgs
- Try to make webui use the same paths and filenames for weights as InvokeAI does (through patching/args/symlinks)
- Create a PR to pynixify with a "skip-errors mode" so that no ugly patches are necessary
- Increase reproducibility by turning models that are currently downloaded at runtime into proper flake inputs (see the sketch below)
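A minimal sketch of what that last item could look like, assuming a fixed-output fetch with `pkgs.fetchurl`; the URL and hash are placeholders, not real model locations, and this is not code from this flake:

```nix
# hypothetical sketch: pin a model checkpoint as a fixed-output derivation so
# Nix fetches and hash-verifies it at build time instead of the app
# downloading it at startup
{ pkgs }:
pkgs.fetchurl {
  url = "https://example.com/models/sd-v1-5.ckpt"; # placeholder URL
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="; # placeholder hash
}
```

The resulting store path could then be symlinked or passed to the apps via their model-directory options.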
## Current versions
- InvokeAI 2.3.1.post2
- stable-diffusion-webui 12.03.2023
## Meta

### Contributions
Contributions are welcome. I have no intention of keeping up with the development pace of these apps, especially Automatic's fork :) . However, I will occasionally update at least InvokeAI's flake. As for versioning, I will try to follow semver with respect to submodules as well, which means a major version bump for a submodule = a major version bump for this flake.
### Acknowledgements
Many, many thanks to https://github.com/cript0nauta/pynixify, which generated all the boilerplate for the missing python packages.
Also thanks to https://github.com/colemickens/stable-diffusion-flake and https://github.com/skogsbrus/stable-diffusion-nix-flake for inspiration and some useful code snippets.
### Similar projects
You may want to check out Nixified-AI, which aims to support a broader range of AI models (e.g. text models) on NixOS.