Mirror of https://github.com/openvinotoolkit/stable-diffusion-webui.git (synced 2024-12-14 22:53:25 +03:00)
Merge remote-tracking branch 'origin/master'
This commit is contained in: commit 0c63aa95e1
README.md
@@ -63,35 +63,37 @@ as model if it has .pth extension. Grab models from the [Model Database](https:/
#### Troubleshooting:
- if your version of Python is not in PATH (or if another version is), create or modify `webui.settings.bat` in the root folder (same place as webui.bat), add the line `set PYTHON=python` to say the full path to your python executable: `set PYTHON=B:\soft\Python310\python.exe`. You can do this for python, but not for git.
- if you get an out of memory error, refer to the section below.
- if your version of Python is not in PATH (or if another version is), edit `webui.bat`, and modify the line `set PYTHON=python` to say the full path to your python executable, for example: `set PYTHON=B:\soft\Python310\python.exe`. You can do this for python, but not for git.
- if you get out of memory errors and your video-card has a low amount of VRAM (4GB), create a file called `webui.custom.bat` (in the same folder as `webui.bat`) and write inside of it `webui.bat --medvram` (see below for other possible options). _From now on, **instead** of running `webui.bat`, you should run `webui.custom.bat`_
- installer creates a python virtual environment, so none of the installed modules will affect your system installation of python if you had one prior to installing this (a rough sketch of what this amounts to follows this list).
- to prevent the creation of virtual environment and use your system python, edit `webui.bat` replacing `set VENV_DIR=venv` with `set VENV_DIR=`.
- webui.bat installs requirements from files `requirements_versions.txt`, which lists versions for modules specifically compatible with Python 3.10.6. If you choose to install for a different version of python, editing `webui.bat` to have `set REQS_FILE=requirements.txt` instead of `set REQS_FILE=requirements_versions.txt` may help (but I still reccomend you to just use the recommended version of python).
- webui.bat installs requirements from the file `requirements_versions.txt`, which lists versions for modules specifically compatible with Python 3.10.6. If you choose to install for a different version of python, editing `webui.bat` to have `set REQS_FILE=requirements.txt` instead of `set REQS_FILE=requirements_versions.txt` may help (but I still recommend that you just use the recommended version of python).
- if you feel you broke something and want to reinstall from scratch, delete directories: `venv`, `repositories`.
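As a rough illustration of the venv point above, here is a sketch of what "create a virtual environment and install pinned requirements" amounts to. This is illustrative only, not the repository's actual launcher code; it just reuses the directory and file names mentioned in this README:

```python
# Illustrative sketch, not the project's actual launcher code.
import subprocess
import sys
import venv

venv.EnvBuilder(with_pip=True).create("venv")  # creates ./venv with its own pip

# interpreter inside the venv (Windows vs. POSIX layout)
python = r"venv\Scripts\python.exe" if sys.platform == "win32" else "venv/bin/python"

# install the pinned module versions from requirements_versions.txt
subprocess.check_call([python, "-m", "pip", "install", "-r", "requirements_versions.txt"])
```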
### What options to use for low VRAM videocards?
Use command line options by creating or modifying `webui.settings.bat` in the root folder (same place as webui.bat), adding a line with `set COMMANDLINE_ARGS=`, and adding the settings at the end of that line.
For example, `set COMMANDLINE_ARGS=--medvram --opt-split-attention`.
- If you have 4GB VRAM and want to make 512x512 (or maybe up to 640x640) images, use `--medvram`.
- If you have 4GB VRAM and want to make 512x512 images, but you get an out of memory error with `--medvram`, use `--medvram --opt-split-attention` instead.
- If you have 4GB VRAM and want to make 512x512 images, and you still get an out of memory error, use `--lowvram --always-batch-cond-uncond --opt-split-attention` instead.
- If you have 4GB VRAM and want to make images larger than you can with `--medvram`, use `--lowvram --opt-split-attention`.
- If you have more VRAM and want to make larger images than you can usually make, use `--medvram --opt-split-attention`. You can use `--lowvram`
also but the effect will likely be barely noticeable.
- Otherwise, do not use any of those.
Extra: if you get a green screen instead of generated pictures, you have a card that doesn't support half
precision floating point numbers. You must use `--precision full --no-half` in addition to other flags,
and the model will take much more space in VRAM.
### Google collab
If you don't want or can't run locally, here is google collab that allows you to run the webui:
https://colab.research.google.com/drive/1Iy-xW9t1-OQWhb0hNxueGij8phCyluOh
### What options to use for low VRAM video-cards?
Use command line options by creating or modifying `webui.settings.bat` in the root folder (same place as webui.bat), adding a line with `set COMMANDLINE_ARGS=`, and adding the settings at the end of that line.
You can, through command line arguments, enable the various optimizations which sacrifice some/a lot of speed in favor of using less VRAM. To do so, simply create (or modify it, if you've previously created it) a file called `webui.settings.bat` _in the same folder_ as `webui.bat`. Inside there should only be one line: `webui.bat <arguments>`
For example, `webui.bat --medvram --opt-split-attention`.
Here's a list of optimization arguments:
- If you have 4GB VRAM and want to make 512x512 (or maybe up to 640x640) images, use `--medvram`.
- If you have 4GB VRAM and want to make 512x512 images, but you get an out of memory error with `--medvram`, use `--medvram --opt-split-attention` instead.
- If you have 4GB VRAM and want to make 512x512 images, and you still get an out of memory error, use `--lowvram --always-batch-cond-uncond --opt-split-attention` instead.
- If you have 4GB VRAM and want to make images larger than you can with `--medvram`, use `--lowvram --opt-split-attention`.
- If you have more VRAM and want to make larger images than you can usually make (for example 1024x1024 instead of 512x512), use `--medvram --opt-split-attention`. You can use `--lowvram`
also but the effect will likely be barely noticeable.
- Otherwise, do not use any of those.
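Conceptually, `--medvram` and `--lowvram` trade speed for memory by keeping parts of the model off the GPU and moving them there only while they are needed. A toy PyTorch sketch of that idea (an illustration of the concept, not the webui's actual implementation):

```python
# Toy illustration of the medvram/lowvram idea, not the webui's actual code:
# keep a submodule on the CPU and move it to the GPU only around its forward pass.
import torch

def offload_between_uses(module: torch.nn.Module, device: str = "cuda") -> None:
    def before_forward(mod, inputs):
        mod.to(device)                 # bring weights to the GPU just in time

    def after_forward(mod, inputs, output):
        mod.to("cpu")                  # free VRAM again once this part is done
        torch.cuda.empty_cache()

    module.register_forward_pre_hook(before_forward)
    module.register_forward_hook(after_forward)
```

Swapping weights back and forth like this is why the flags cost speed; the more aggressively parts are offloaded (`--lowvram`), the slower generation gets.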
Extra: if you get a green screen instead of generated pictures, you have a card that doesn't support half
precision floating point numbers (Known issue with 16xx cards). You must use `--precision full --no-half` in addition to other flags,
and the model will take much more space in VRAM (you will likely have to also use at least `--medvram`).
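For context, a minimal PyTorch sketch of what `--no-half` trades off (a sketch of the general idea, not the webui's actual code):

```python
# Sketch: how a --no-half style flag typically maps to model dtype in PyTorch.
import torch

def prepare_model(model: torch.nn.Module, no_half: bool) -> torch.nn.Module:
    # float16 roughly halves VRAM use, but cards without working fp16 support
    # (the 16xx issue mentioned above) can produce broken/green output with it.
    return model.float() if no_half else model.half()
```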
### Running online
Use `--share` option to run online. You will get a xxx.app.gradio link. This is the intended way to use the
@@ -25,16 +25,16 @@ parser.add_argument("--gfpgan-model", type=str, help="GFPGAN model file name", d
parser.add_argument("--no-half", action='store_true', help="do not switch the model to 16-bit floats")
parser.add_argument("--no-progressbar-hiding", action='store_true', help="do not hide progressbar in gradio UI (we hide it because it slows down ML if you have hardware accleration in browser)")
|
||||
parser.add_argument("--max-batch-count", type=int, default=16, help="maximum batch count value for the UI")
parser.add_argument("--embeddings-dir", type=str, default='embeddings', help="embeddings dirtectory for textual inversion (default: embeddings)")
parser.add_argument("--embeddings-dir", type=str, default='embeddings', help="embeddings directory for textual inversion (default: embeddings)")
parser.add_argument("--allow-code", action='store_true', help="allow custom script execution from webui")
parser.add_argument("--medvram", action='store_true', help="enable stable diffusion model optimizations for sacrficing a little speed for low VRM usage")
parser.add_argument("--lowvram", action='store_true', help="enable stable diffusion model optimizations for sacrficing a lot of speed for very low VRM usage")
parser.add_argument("--always-batch-cond-uncond", action='store_true', help="a workaround test; may help with speed in you use --lowvram")
parser.add_argument("--medvram", action='store_true', help="enable stable diffusion model optimizations for sacrificing a little speed for low VRM usage")
|
||||
parser.add_argument("--lowvram", action='store_true', help="enable stable diffusion model optimizations for sacrificing a lot of speed for very low VRM usage")
|
||||
parser.add_argument("--always-batch-cond-uncond", action='store_true', help="a workaround test; may help with speed if you use --lowvram")
parser.add_argument("--unload-gfpgan", action='store_true', help="unload GFPGAN every time after processing images. Warning: seems to cause memory leaks")
parser.add_argument("--precision", type=str, help="evaluate at this precision", choices=["full", "autocast"], default="autocast")
parser.add_argument("--share", action='store_true', help="use share=True for gradio and make the UI accessible through their site (doesn't work for me but you might have better luck)")
parser.add_argument("--esrgan-models-path", type=str, help="path to directory with ESRGAN models", default=os.path.join(script_path, 'ESRGAN'))
parser.add_argument("--opt-split-attention", action='store_true', help="enable optimization that reduced vram usage by a lot for about 10%% decrease in performance")
parser.add_argument("--opt-split-attention", action='store_true', help="enable optimization that reduce vram usage by a lot for about 10%% decrease in performance")
|
||||
parser.add_argument("--listen", action='store_true', help="launch gradio with 0.0.0.0 as server name, allowing to respond to network requests")
|
||||
parser.add_argument("--port", type=int, help="launch gradio with given server port, you need root/admin rights for ports < 1024, defaults to 7860 if available", default=None)
cmd_opts = parser.parse_args()
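To make the flags above concrete, here is a hypothetical sketch of how `--share`, `--listen` and `--port` are typically passed on to gradio's `launch()`; the `demo` object and this exact wiring are assumptions for illustration, not the project's actual code:

```python
# Hypothetical wiring, for illustration only; cmd_opts is the Namespace parsed above.
import gradio as gr

demo = gr.Interface(fn=lambda text: text, inputs="text", outputs="text")  # placeholder UI

demo.launch(
    share=cmd_opts.share,                                # --share: create a public gradio share link
    server_name="0.0.0.0" if cmd_opts.listen else None,  # --listen: accept network requests
    server_port=cmd_opts.port,                           # --port: None falls back to 7860
)
```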
@@ -96,13 +96,13 @@ class Options:
data = None
data_labels = {
"outdir_samples": OptionInfo("", "Output dictectory for images; if empty, defaults to two directories below"),
"outdir_txt2img_samples": OptionInfo("outputs/txt2img-images", 'Output dictectory for txt2img images'),
"outdir_img2img_samples": OptionInfo("outputs/img2img-images", 'Output dictectory for img2img images'),
"outdir_extras_samples": OptionInfo("outputs/extras-images", 'Output dictectory for images from extras tab'),
"outdir_grids": OptionInfo("", "Output dictectory for grids; if empty, defaults to two directories below"),
"outdir_txt2img_grids": OptionInfo("outputs/txt2img-grids", 'Output dictectory for txt2img grids'),
"outdir_img2img_grids": OptionInfo("outputs/img2img-grids", 'Output dictectory for img2img grids'),
"outdir_samples": OptionInfo("", "Output directory for images; if empty, defaults to two directories below"),
"outdir_txt2img_samples": OptionInfo("outputs/txt2img-images", 'Output directory for txt2img images'),
"outdir_img2img_samples": OptionInfo("outputs/img2img-images", 'Output directory for img2img images'),
"outdir_extras_samples": OptionInfo("outputs/extras-images", 'Output directory for images from extras tab'),
"outdir_grids": OptionInfo("", "Output directory for grids; if empty, defaults to two directories below"),
"outdir_txt2img_grids": OptionInfo("outputs/txt2img-grids", 'Output directory for txt2img grids'),
"outdir_img2img_grids": OptionInfo("outputs/img2img-grids", 'Output directory for img2img grids'),
"save_to_dirs": OptionInfo(False, "When writing images/grids, create a directory with name derived from the prompt"),
"save_to_dirs_prompt_len": OptionInfo(10, "When using above, how many words from prompt to put into directory name", gr.Slider, {"minimum": 1, "maximum": 32, "step": 1}),
"outdir_save": OptionInfo("log/images", "Directory for saving images using the Save button"),
@@ -228,7 +228,7 @@ def create_ui(txt2img, img2img, run_extras, run_pnginfo):
with gr.Column(variant='panel'):
with gr.Group():
txt2img_preview = gr.Image(elem_id='txt2img_preview', visible=False)
txt2img_gallery = gr.Gallery(label='Output', elem_id='txt2img_gallery')
txt2img_gallery = gr.Gallery(label='Output', elem_id='txt2img_gallery').style(grid=4)
with gr.Group():
@@ -364,7 +364,7 @@ def create_ui(txt2img, img2img, run_extras, run_pnginfo):
with gr.Column(variant='panel'):
with gr.Group():
img2img_preview = gr.Image(elem_id='img2img_preview', visible=False)
img2img_gallery = gr.Gallery(label='Output', elem_id='img2img_gallery')
img2img_gallery = gr.Gallery(label='Output', elem_id='img2img_gallery').style(grid=4)
with gr.Group():
with gr.Row():
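For context on the `.style(grid=4)` change applied to both galleries, here is a self-contained Gradio 3 sketch (illustrative only, not the project's `create_ui`) showing a gallery laid out as a four-column grid:

```python
# Minimal Gradio 3 example of a Gallery styled as a 4-column grid (illustrative only).
import gradio as gr

def generate_stub(prompt):
    return []  # stand-in: the real txt2img/img2img functions return generated images

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Prompt")
    with gr.Column(variant='panel'):
        with gr.Group():
            preview = gr.Image(visible=False)
            gallery = gr.Gallery(label='Output', elem_id='gallery').style(grid=4)
    prompt.submit(fn=generate_stub, inputs=prompt, outputs=gallery)

if __name__ == "__main__":
    demo.launch()
```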
script.js
@@ -1,8 +1,8 @@
titles = {
"Sampling steps": "How many times to imptove the generated image itratively; higher values take longer; very low values can produce bad results",
"Sampling steps": "How many times to improve the generated image iteratively; higher values take longer; very low values can produce bad results",
"Sampling method": "Which algorithm to use to produce the image",
"GFPGAN": "Restore low quality faces using GFPGAN neural network",
"Euler a": "Euler Ancestral - very creative, each can get acompletely different pictures depending on step count, setting seps tohigher than 30-40 does not help",
"Euler a": "Euler Ancestral - very creative, each can get a completely different picture depending on step count, setting steps to higher than 30-40 does not help",
|
||||
"DDIM": "Denoising Diffusion Implicit Models - best at inpainting",
"Batch count": "How many batches of images to create",
@@ -11,7 +11,7 @@ titles = {
"Seed": "A value that determines the output of random number generator - if you create an image with same parameters and seed as another image, you'll get the same result",
"Inpaint a part of image": "Draw a mask over an image, and the script will regenerate the masked area with content according to prompt",
"Loopback": "Process an image, use it as an input, repeat. Batch count determings number of iterations.",
"Loopback": "Process an image, use it as an input, repeat. Batch count determins number of iterations.",
|
||||
"SD upscale": "Upscale image normally, split result into tiles, improve each tile using img2img, merge whole image back",
"Just resize": "Resize image to target resolution. Unless height and width match, you will get incorrect aspect ratio.",
@@ -37,13 +37,13 @@ titles = {
"None": "Do not do anything special",
"Prompt matrix": "Separate prompts into parts using vertical pipe character (|) and the script will create a picture for every combination of them (except for the first part, which will be present in all combinations)",
"X/Y plot": "Create a grid where images will have different parameters. Use inputs below to specify which parameterswill be shared by columns and rows",
"Custom code": "Run python code. Advanced user only. Must run program with --allow-code for this to work",
"X/Y plot": "Create a grid where images will have different parameters. Use inputs below to specify which parameters will be shared by columns and rows",
"Custom code": "Run Python code. Advanced user only. Must run program with --allow-code for this to work",
"Prompt S/R": "Separate a list of words with commas, and the first word will be used as a keyword: script will search for this word in the prompt, and replace it with others",
"Tiling": "Produce an image that can be tiled.",
"Tile overlap": "For SD upscale, how much overlap in pixels should there be between tiles. Tils overlap so that when they are merged back into one oicture, there is no clearly visible seam.",
"Tile overlap": "For SD upscale, how much overlap in pixels should there be between tiles. Tiles overlap so that when they are merged back into one picture, there is no clearly visible seam.",
"Roll": "Add a random artist to the prompt.",
}
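The "Prompt matrix" behaviour described in the tooltips above (one image per combination of the `|`-separated parts, with the first part always kept) can be sketched in a few lines of Python; this is an illustration of the described behaviour, not the script's actual implementation:

```python
# Illustration of the "Prompt matrix" tooltip above, not the actual script code.
def prompt_matrix(prompt: str) -> list[str]:
    parts = [p.strip() for p in prompt.split("|")]
    base, optional = parts[0], parts[1:]
    results = []
    for mask in range(2 ** len(optional)):            # every subset of the optional parts
        chosen = [opt for i, opt in enumerate(optional) if mask & (1 << i)]
        results.append(", ".join([base] + chosen))
    return results

print(prompt_matrix("a castle | at night | oil painting"))
# ['a castle', 'a castle, at night', 'a castle, oil painting', 'a castle, at night, oil painting']
```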