Added OV, diffusers in requirements. Minor update to Note text.

This commit is contained in:
Ravi Panchumarthy 2023-08-09 15:31:25 -07:00 committed by GitHub
parent 1d2532eaa7
commit 11779119d7
3 changed files with 10 additions and 7 deletions


@@ -31,4 +31,5 @@ torch
 torchdiffeq
 torchsde
 transformers==4.30.0
-diffusers
+diffusers==0.18.2
+openvino==2023.1.0.dev20230728


@@ -29,5 +29,6 @@ torch
 torchdiffeq==0.2.3
 torchsde==0.2.5
 transformers==4.30.0
-diffusers
+diffusers==0.18.2
+openvino==2023.1.0.dev20230728
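The pins above can be verified at runtime with only the standard library. A minimal sketch — the `check_pins` helper is illustrative and not part of this commit; only the two pins themselves come from the diff:

```python
from importlib.metadata import PackageNotFoundError, version

def check_pins(pins):
    """Compare installed package versions against expected pins.

    Returns a list of (package, expected, installed) tuples for every
    mismatch; installed is None when the package is not present.
    """
    mismatches = []
    for name, expected in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            installed = None
        if installed != expected:
            mismatches.append((name, expected, installed))
    return mismatches

# Pins introduced by this commit:
PINS = {
    "diffusers": "0.18.2",
    "openvino": "2023.1.0.dev20230728",
}
```

After installing the requirements, `check_pins(PINS)` returns an empty list when the environment matches the pinned versions.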


@@ -723,11 +723,12 @@ class Script(scripts.Script):
 """
 ###
 ### Note:
-First inference involves compilation of the model for best performance.
-Excluding the first inference (or warm up inference) is recommended for
-performance measurements. When resolution, batchsize, or device is changed,
-or samplers like DPM++ or Karras are selected, model is recompiled. Subsequent
-iterations use the cached compiled model for faster inference.
+- First inference involves compilation of the model for best performance.
+Since compilation happens only on the first run, the first inference (or warm up inference) will be slower than subsequent inferences.
+- For accurate performance measurements, it is recommended to exclude this slower first inference, as it doesn't reflect normal running time.
+- Model is recompiled when resolution, batchsize, device, or samplers like DPM++ or Karras are changed.
+After recompiling, later inferences will reuse the newly compiled model and achieve faster running times.
+So it's normal for the first inference after a settings change to be slower, while subsequent inferences use the optimized compiled model and run faster.
 """)
 def local_config_change(choice):
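The measurement advice in the updated note can be sketched as a small timing helper. This is an illustrative sketch, not code from this commit; `run_inference` is a hypothetical stand-in for a call into the pipeline:

```python
import time

def average_latency(run_inference, warmup_runs=1, timed_runs=5):
    """Average per-run latency, excluding warm-up runs.

    The first call may trigger one-time model compilation, so it is
    executed (and discarded) before timing starts; only the subsequent
    steady-state runs, which reuse the cached compiled model, are timed.
    """
    for _ in range(warmup_runs):
        run_inference()  # compilation happens here on the first call
    start = time.perf_counter()
    for _ in range(timed_runs):
        run_inference()
    return (time.perf_counter() - start) / timed_runs
```

Since changing resolution, batch size, device, or sampler triggers recompilation, the warm-up runs must be repeated after any such change before timing again.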