# Setting command line options

Edit `scripts/relauncher.py` so that the launch line `python scripts/webui.py` becomes, for example, `python scripts/webui.py --no-half --precision=full`.

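For reference, here is a minimal sketch of what the edited launch loop in `scripts/relauncher.py` could look like. The relaunch loop shown below is illustrative and will differ between versions of the script; only the appended options matter.

```
# Illustrative sketch of scripts/relauncher.py after the edit.
# The relaunch loop is a stand-in for whatever your version of the script contains.
import os
import time

while True:
    # Before the edit the launch line would be:  os.system("python scripts/webui.py")
    # After the edit, the extra command line options are appended:
    os.system("python scripts/webui.py --no-half --precision=full")
    print("Relaunching in 1 second...")
    time.sleep(1)
```
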
# List of command line options

```
optional arguments:
  -h, --help            show this help message and exit
  --ckpt CKPT           path to checkpoint of model (default: models/ldm/stable-diffusion-v1/model.ckpt)
  --cli CLI             don't launch web server, take Python function kwargs from this file. (default: None)
  --config CONFIG       path to config which constructs model (default: configs/stable-diffusion/v1-inference.yaml)
  --defaults DEFAULTS   path to configuration file providing UI defaults, uses same format as cli parameter (default:
                        configs/webui/webui.yaml)
  --esrgan-cpu          run ESRGAN on cpu (default: False)
  --esrgan-gpu ESRGAN_GPU
                        run ESRGAN on specific gpu (overrides --gpu) (default: 0)
  --extra-models-cpu    run extra models (GFPGAN/ESRGAN) on cpu (default: False)
  --extra-models-gpu    run extra models (GFPGAN/ESRGAN) on gpu (default: False)
  --gfpgan-cpu          run GFPGAN on cpu (default: False)
  --gfpgan-dir GFPGAN_DIR
                        GFPGAN directory (default: ./src/gfpgan)
  --gfpgan-gpu GFPGAN_GPU
                        run GFPGAN on specific gpu (overrides --gpu) (default: 0)
  --gpu GPU             choose which GPU to use if you have multiple (default: 0)
  --grid-format GRID_FORMAT
                        png for lossless png files; jpg:quality for lossy jpeg; webp:quality for lossy webp, or
                        webp:-compression for lossless webp (default: jpg:95)
  --inbrowser           automatically launch the interface in a new tab on the default browser (default: False)
  --ldsr-dir LDSR_DIR   LDSR directory (default: ./src/latent-diffusion)
  --n_rows N_ROWS       rows in the grid; use -1 for autodetect and 0 for n_rows to be same as batch_size (default: -1)
  --no-half             do not switch the model to 16-bit floats (default: False)
  --no-progressbar-hiding
                        do not hide progressbar in gradio UI (we hide it because it slows down ML if you have hardware
                        acceleration in browser) (default: False)
  --no-verify-input     do not verify input to check if it's too long (default: False)
  --optimized-turbo     alternative optimization mode that does not save as much VRAM but runs significantly faster
                        (default: False)
  --optimized           load the model onto the device piecemeal instead of all at once to reduce VRAM usage at the
                        cost of performance (default: False)
  --outdir_img2img [OUTDIR_IMG2IMG]
                        dir to write img2img results to (overrides --outdir) (default: None)
  --outdir_imglab [OUTDIR_IMGLAB]
                        dir to write imglab results to (overrides --outdir) (default: None)
  --outdir_txt2img [OUTDIR_TXT2IMG]
                        dir to write txt2img results to (overrides --outdir) (default: None)
  --outdir [OUTDIR]     dir to write results to (default: None)
  --filename_format [FILENAME_FORMAT]
                        filenames format (default: None)
  --port PORT           choose the port for the gradio webserver to use (default: 7860)
  --precision {full,autocast}
                        evaluate at this precision (default: autocast)
  --realesrgan-dir REALESRGAN_DIR
                        RealESRGAN directory (default: ./src/realesrgan)
  --realesrgan-model REALESRGAN_MODEL
                        Upscaling model for RealESRGAN (default: RealESRGAN_x4plus)
  --save-metadata       Store generation parameters in the output png. Drop saved png into Image Lab to read
                        parameters (default: False)
  --share-password SHARE_PASSWORD
                        Sharing is open by default, use this to set a password. Username: webui (default: None)
  --share               Should share your server on gradio.app, this allows you to use the UI from your mobile app
                        (default: False)
  --skip-grid           do not save a grid, only individual samples. Helpful when evaluating lots of samples
                        (default: False)
  --skip-save           do not save individual samples. For speed measurements. (default: False)
  --no-job-manager      Don't use the experimental job manager on top of gradio (default: False)
  --max-jobs MAX_JOBS   Maximum number of concurrent 'generate' commands (default: 1)
  --tiling              Generate tiling images (default: False)
```
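
Several of these options can be combined in a single launch. The sketch below is only a hypothetical example; the GPU index, output folder, and grid format values are placeholders, not recommendations.

```
# Hypothetical example combining several of the options listed above.
# The GPU index, output folder and grid format are placeholder values.
import subprocess

subprocess.run([
    "python", "scripts/webui.py",
    "--gpu", "1",              # use the second GPU
    "--outdir", "outputs",     # write all results under ./outputs
    "--grid-format", "png",    # lossless grids instead of the jpg:95 default
    "--save-metadata",         # embed generation parameters in the output png
])
```
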
# Custom models

You can use other *versions* of Stable Diffusion, as well as *finetunes* of it.

## Stable Diffusion versions

### v1-3

### v1-4

### 🔜™️ v1-5 🔜™️

## Finetunes

### TrinArt/Trin-sama

#### [v1](https://huggingface.co/naclbit/trinart_stable_diffusion)

#### [v2](https://huggingface.co/naclbit/trinart_stable_diffusion_v2)

### [Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion)

#### [v2](https://storage.googleapis.com/ws-store2/wd-v1-2-full-ema.ckpt) [trimmed](https://huggingface.co/crumb/pruned-waifu-diffusion)
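
A custom checkpoint is selected the same way as the default model, via the `--ckpt` option listed above. A minimal sketch, assuming the Waifu Diffusion file linked above was downloaded to a hypothetical `models/custom/` folder:

```
# Minimal sketch: point the webui at a downloaded finetune checkpoint.
# models/custom/ is an arbitrary example location, not a path the repo creates.
import subprocess

subprocess.run([
    "python", "scripts/webui.py",
    "--ckpt", "models/custom/wd-v1-2-full-ema.ckpt",
])
```

Alternatively, since `--ckpt` defaults to `models/ldm/stable-diffusion-v1/model.ckpt`, placing the custom checkpoint at that path should also work without any extra options.
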
# **Upscalers**

### It is currently open to discussion whether all these different **upscalers** should only be usable on their respective standalone tabs.

### _**Why?**_

* When you generate a large batch of images, will every image be good enough to deserve upscaling?
* Images are upscaled right after generation, which delays the next image in the batch; it is better to wait for the batch to finish and then decide which images need upscaling.
* Clutter: more upscalers = more GUI options.

### What if I _need_ upscaling after generation and can't just do it as post-processing?

* One solution would be to hide the options by default and add a CLI switch to enable them (see the sketch after this list).
* One issue with that solution is that GUI options would still need to be maintained for every new upscaler.
* Another issue is how to make sure that people who need it know about it, without people who don't need it accidentally enabling it and getting unexpected results.
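
As a purely illustrative sketch of the "hide behind a CLI switch" idea (the flag name `--show-upscaler-options` and the gradio layout below are hypothetical, not part of the current code):

```
# Hypothetical: only build the upscaler controls when an opt-in flag is passed,
# keeping the default gradio UI uncluttered.
import argparse
import gradio as gr

parser = argparse.ArgumentParser()
parser.add_argument("--show-upscaler-options", action="store_true",
                    help="expose upscaler options on the generation tabs (hypothetical flag)")
opt, _ = parser.parse_known_args()

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Prompt")
    # Hidden unless the switch is passed on the command line.
    with gr.Column(visible=opt.show_upscaler_options):
        upscaler = gr.Dropdown(["RealESRGAN", "LDSR"], label="Upscaler")

demo.launch()
```
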

### Which upscalers are planned?

* **goBIG**: this was implemented but reverted due to a bug; the decision was then made to wait until other upscalers were added before reimplementing it.
* **SwinIR-L 4x**
* **CodeFormer**
* _**more, suggestions welcome**_
* If the idea of keeping all upscalers as post-processing is accepted, a single **'Upscalers'** tab hosting all of them could simplify both the process and the UI.