Added OV, diffusers in requirements. Minor update to Note text.

Ravi Panchumarthy 2023-08-09 15:31:25 -07:00 committed by GitHub
parent 1d2532eaa7
commit 11779119d7
3 changed files with 10 additions and 7 deletions


@@ -31,4 +31,5 @@ torch
 torchdiffeq
 torchsde
 transformers==4.30.0
-diffusers
+diffusers==0.18.2
+openvino==2023.1.0.dev20230728


@@ -29,5 +29,6 @@ torch
 torchdiffeq==0.2.3
 torchsde==0.2.5
 transformers==4.30.0
-diffusers
+diffusers==0.18.2
+openvino==2023.1.0.dev20230728


@@ -723,11 +723,12 @@ class Script(scripts.Script):
 """
 ###
 ### Note:
-First inference involves compilation of the model for best performance.
-Excluding the first inference (or warm up inference) is recommended for
-performance measurements. When resolution, batchsize, or device is changed,
-or samplers like DPM++ or Karras are selected, model is recompiled. Subsequent
-iterations use the cached compiled model for faster inference.
+- First inference involves compilation of the model for best performance.
+Since compilation happens only on the first run, the first (or warm-up) inference will be slower than subsequent inferences.
+- For accurate performance measurements, it is recommended to exclude this slower first inference, as it does not reflect typical running time.
+- The model is recompiled when the resolution, batch size, or device is changed, or when samplers like DPM++ or Karras are selected.
+After recompiling, later inferences reuse the newly compiled model and run faster.
+So it is normal for the first inference after a settings change to be slower, while subsequent inferences use the cached compiled model.
 """)
 def local_config_change(choice):
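The Note's advice to exclude the warm-up (compile) run from timing can be sketched as a small helper. This is a minimal illustration, not code from the commit: `benchmark` and `fake_inference` are hypothetical names, and the stand-in workload replaces a real pipeline call.

```python
import time

def benchmark(fn, *args, warmup=1, runs=5):
    """Time fn over several runs, discarding the first `warmup` calls.

    The first call may trigger model compilation (as the Note describes),
    so it is excluded from the reported average.
    """
    for _ in range(warmup):
        fn(*args)  # warm-up/compile run, not timed
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Stand-in workload; a real inference call would go here.
def fake_inference():
    time.sleep(0.01)

avg = benchmark(fake_inference)
print(f"avg inference time: {avg:.4f}s")
```

Remember that after any settings change that forces recompilation (resolution, batch size, device, sampler), the next run should again be treated as a warm-up.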