Mirror of https://github.com/openvinotoolkit/stable-diffusion-webui.git
Added OV, diffusers in requirements. Minor update to Note text.
parent 1d2532eaa7
commit 11779119d7
@@ -31,4 +31,5 @@ torch
 torchdiffeq
 torchsde
 transformers==4.30.0
-diffusers
+diffusers==0.18.2
+openvino==2023.1.0.dev20230728
@@ -29,5 +29,6 @@ torch
 torchdiffeq==0.2.3
 torchsde==0.2.5
 transformers==4.30.0
-diffusers
+diffusers==0.18.2
+openvino==2023.1.0.dev20230728
 
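Both hunks above make the same change in the two requirements files: the unpinned diffusers entry is pinned to diffusers==0.18.2, and openvino==2023.1.0.dev20230728 is added. As a quick sanity check after pip install -r requirements.txt, a minimal sketch along these lines (not part of the commit; only the package names and versions are taken from the diff above) can confirm the pins resolved as expected:

# Sanity-check sketch: verify that the packages pinned in the diff above
# actually resolved after installation. Illustrative only.
from importlib.metadata import PackageNotFoundError, version

for pkg, expected in [("diffusers", "0.18.2"),
                      ("openvino", "2023.1.0.dev20230728")]:
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    status = "OK" if installed == expected else f"expected {expected}"
    print(f"{pkg}: {installed} ({status})")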
@@ -723,11 +723,12 @@ class Script(scripts.Script):
         """
         ###
         ### Note:
-        First inference involves compilation of the model for best performance.
-        Excluding the first inference (or warm up inference) is recommended for
-        performance measurements. When resolution, batchsize, or device is changed,
-        or samplers like DPM++ or Karras are selected, model is recompiled. Subsequent
-        iterations use the cached compiled model for faster inference.
+        - First inference involves compilation of the model for best performance.
+        Since compilation happens only on the first run, the first inference (or warm up inference) will be slower than subsequent inferences.
+        - For accurate performance measurements, it is recommended to exclude this slower first inference, as it doesn't reflect normal running time.
+        - Model is recompiled when resolution, batchsize, device, or samplers like DPM++ or Karras are changed.
+        After recompiling, later inferences will reuse the newly compiled model and achieve faster running times.
+        So it's normal for the first inference after a settings change to be slower, while subsequent inferences use the optimized compiled model and run faster.
         """)
 
         def local_config_change(choice):
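The rewritten note boils down to a benchmarking rule: time the first (warm-up) inference separately, since it includes one-time model compilation, and report only the steady-state iterations. A minimal sketch of that pattern, where run_inference is a hypothetical stand-in for whatever produces an image (e.g. one pipeline call):

import time

def benchmark(run_inference, iterations=5):
    # Warm-up: the first call includes one-time model compilation,
    # so time it separately and exclude it from the average.
    t0 = time.perf_counter()
    run_inference()
    warmup = time.perf_counter() - t0

    # Steady state: these calls reuse the cached compiled model.
    timings = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        run_inference()
        timings.append(time.perf_counter() - t0)

    print(f"warm-up (includes compilation): {warmup:.2f}s")
    print(f"steady-state average: {sum(timings) / len(timings):.2f}s")

# Per the note above, changing resolution, batch size, device, or sampler
# triggers a recompile, so the warm-up measurement should be repeated after
# any such change.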