readme update, made packages runnable

This commit is contained in:
gbtb 2023-03-18 22:50:38 +10:00
parent b11251c2e4
commit 9c6946f12f
3 changed files with 38 additions and 29 deletions

View File

@@ -12,14 +12,14 @@
Flake for running SD on NixOS
## What's done
* Nix devShell capable of running InvokeAI's and stable-diffusion-webui flavors of SD without need to reach for pip or conda (including AMD ROCM support)
* Nix flake capable of running the InvokeAI and stable-diffusion-webui flavors of SD without the need to reach for pip or conda (including AMD ROCM support)
* ...???
* PROFIT
# How to use it?
## InvokeAI
1. Clone repo
1. Run `nix run -L .#invokeai.{default,amd,nvidia} -- --web`, wait for package to build
1. Run `nix run .#invokeai.{default,amd,nvidia} -- --web --root_dir "folder for configs and models"`, wait for package to build
1. `.#invokeai.default` builds a package which overrides the bare minimum required for SD to run
1. `.#invokeai.amd` builds a package which overrides the torch packages with ROCM-enabled bin versions
1. `.#invokeai.nvidia` builds a package with an overlay explicitly setting `cudaSupport = true` for torch
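The three flavors above can be thought of as one package function applied with different torch overrides. A minimal sketch of how they could map to flake outputs (the `mkInvokeai` helper and its option names are hypothetical, not the flake's actual code):

```nix
# Hypothetical sketch; names follow the commands above, the real flake.nix may differ.
packages.x86_64-linux.invokeai = {
  default = mkInvokeai { };                      # bare-minimum overrides
  amd     = mkInvokeai { rocm = true; };         # ROCM-enabled torch-bin packages
  nvidia  = mkInvokeai { cudaSupport = true; };  # torch built with CUDA
};
```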
@@ -31,40 +31,43 @@ Flake for running SD on NixOS
## stable-diffusion-webui aka 111AUTOMATIC111 fork
1. Clone repo
1. Clone submodule with stable-diffusion-webui
1. Run `nix develop .#webui.{default,nvidia,amd}`, wait for shell to build
1. `.#webui.default` builds shell which overrides bare minimum required for SD to run
1. `.#webui.amd` builds shell which overrides torch packages with ROCM-enabled bin versions
1. `.#webui.nvidia` builds shell with overlay explicitly setting `cudaSupport = true` for torch
1. Obtain and place SD weights into `stable-diffusion-webui/models/Stable-diffusion/model.ckpt`
1. Inside `stable-diffusion-webui/` directory, run `python launch.py` to start web server. It should preload required models from the start. Additional models, such as CLIP, will be loaded before the first actual usage of them.
1. Run `nix run .#webui.{default,nvidia,amd} -- "runtime folder for webui stuff" --ckpt-dir "folder with pre-downloaded main SD models"`, wait for the packages to build
1. `.#webui.default` builds a package which overrides the bare minimum required for SD to run
1. `.#webui.amd` builds a package which overrides the torch packages with ROCM-enabled bin versions
1. `.#webui.nvidia` builds a package with an overlay explicitly setting `cudaSupport = true` for torch
1. Webui is not a proper Python package by itself, so I had to make a multi-layered wrapper script which sets the required env and args. `bin/flake-launch` is the top-level wrapper, which sets default args and runs by default. `bin/launch.py` is a thin wrapper around the original launch.py which only sets PYTHONPATH with the required packages. Both wrappers pass additional args further down the pipeline. To list all available args, you may run `nix run .#webui.amd -- "" --help`.
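The `nvidia` variants set `cudaSupport = true` through a nixpkgs overlay. A minimal sketch of what such an overlay looks like (illustrative only, not the flake's exact code):

```nix
# Hypothetical overlay forcing a CUDA-enabled torch in the Python package set
final: prev: {
  python3 = prev.python3.override {
    packageOverrides = pyFinal: pyPrev: {
      torch = pyPrev.torch.override { cudaSupport = true; };
    };
  };
}
```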
## ROCM shenanigans
todo
## Hardware quirks
### AMD
If you get the error `"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"`, your GPU is probably not fully supported by ROCM (only a handful of GPUs are by default), and you have to set an env variable to trick ROCM into running: `export HSA_OVERRIDE_GFX_VERSION=10.3.0`
# What's needed to be done
### Nvidia
* **Please note that I don't have an Nvidia GPU, and therefore I can't test that the CUDA functionality actually works. If something is broken in that department, please open an issue, or even better, submit a PR with a proposed fix.**
* xformers for CUDA hasn't been tested. The Python package has been added to the flake, but it's missing the triton compiler. It might partially work, so please test it and report back :)
- [x] devShell with CUDA support (should be trivial, but requires volunteer with NVidia GPU)
- [ ] Missing packages definitions should be submitted to Nixpkgs
- [x] Investigate ROCM device warning on startup
- [x] Apply patches so that all downloaded models would go into one specific folder
# What (probably) needs to be done
- [ ] Definitions for the most popular missing packages should be submitted to Nixpkgs
- [ ] Try to make webui use the same paths and filenames for weights as InvokeAI (through patching/args/symlinks)
- [ ] Create a PR to pynixify adding a "skip-errors" mode so that the ugly patches become unnecessary
- [ ] Shell hooks for initial setup?
- [x] Maybe this devShell should itself be turned into a package?
- [x] Add additional flavors of SD?
- [ ] Increase reproducibility by turning models that are downloaded at runtime into proper flake inputs
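For that last item, a model checkpoint can be pinned as a non-flake input so that Nix fetches and hashes it instead of the app downloading it at runtime. A hypothetical sketch (the input name and URL are placeholders):

```nix
# Hypothetical: pin a model file as a flake input instead of a runtime download
inputs.sd-model = {
  url = "https://example.com/model.ckpt";
  flake = false;
};
# ...then symlink or copy the fetched store path into the models folder during build
```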
# Updates and versioning
Current versions:
- InvokeAI 2.3.1.post2
- stable-diffusion-webui 27.10.2022
- stable-diffusion-webui 12.03.2023
I have no intention of keeping up with the development pace of these apps, especially Automatic's fork :). However, I will occasionally update at least InvokeAI's flake. Regarding versioning, I will try to follow semver with respect to submodules as well, which means a major version bump for a submodule = a major version bump for this flake.
# Acknowledgements
# Meta
## Contributions
Contributions are welcome. TODO: Explain scope
## Acknowledgements
Many many thanks to https://github.com/cript0nauta/pynixify which generated all the boilerplate for missing python packages.
Also thanks to https://github.com/colemickens/stable-diffusion-flake and https://github.com/skogsbrus/stable-diffusion-nix-flake for inspiration and some useful code snippets.
# Similar projects
todo
## Similar projects
You may want to check out [Nixified-AI](https://github.com/nixified-ai/flake). It aims to support a broader range of AI models (e.g. text models) on NixOS.

View File

@@ -76,13 +76,17 @@
"webui-repo": {
"flake": false,
"locked": {
"narHash": "sha256-L5aYKZl2NZgH+Ikddz9mZRvejhsa7N7svt2Rfi/DFL4=",
"type": "file",
"url": "https://github.com/gbtb/stable-diffusion-webui"
"lastModified": 1679138564,
"narHash": "sha256-h7VGcH35Pd3TAVNpNB2scqZAp0NTi7swl+l43GhvcI0=",
"owner": "gbtb",
"repo": "stable-diffusion-webui",
"rev": "bc7e54010c9781ae2d5ceb230a1ab5bfc8ba2ffe",
"type": "github"
},
"original": {
"type": "file",
"url": "https://github.com/gbtb/stable-diffusion-webui"
"owner": "gbtb",
"repo": "stable-diffusion-webui",
"type": "github"
}
}
},

View File

@@ -16,7 +16,7 @@
};
webui-repo = {
#url = "github:AUTOMATIC1111/stable-diffusion-webui";
url = "https://github.com/gbtb/stable-diffusion-webui";
url = "github:gbtb/stable-diffusion-webui";
flake = false;
};
};
@@ -257,6 +257,7 @@
version = "2.3.1";
src = invokeai-repo;
format = "pyproject";
meta.mainProgram = "invokeai";
propagatedBuildInputs = requirementsFor { pkgs = nixpkgs; nvidia = nixpkgs.nvidia; };
nativeBuildInputs = [ nixpkgs.pkgs.pythonRelaxDepsHook ];
pythonRelaxDeps = [ "torch" "pytorch-lightning" "flask-socketio" "flask" "dnspython" ];
@@ -274,6 +274,7 @@
format = "other";
propagatedBuildInputs = requirementsFor { pkgs = nixpkgs; webui = true; nvidia = nixpkgs.nvidia; };
nativeBuildInputs = [ nixpkgs.pkgs.makeWrapper ];
meta.mainProgram = "flake-launch";
buildPhase = ''
runHook preBuild
cp -r . $out