* Added a progress bar and some extra info so you can see how the generation is going without having to check the console every time.
* Updated the Image-to-Image tab; it is now working at a basic level.
  - Disabled RealESRGAN by default for the Image-to-Image tab, as it is not working right now.
* Fixed the K Diffusion samplers not working; they use different callbacks than the DDIM and PLMS samplers. Also removed some leftover code that is no longer needed now that we can use the K Diffusion samplers directly.
* The GFPGAN and RealESRGAN checkboxes on the Advanced tab are no longer shown if said models are not available.
* Implemented the basic layout for the Image-to-Image tab on the UI.
  - Fixed the check for the GFPGAN and RealESRGAN models: it previously tested whether the folder existed instead of whether the model files were inside it.
  - Removed the Basic tab from the Text-to-Image tab and changed the Advanced tab into an expander; the Basic tab was not actually being used on the Streamlit version of the UI.
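The folder-vs-file check mentioned above can be sketched as follows; the model paths here are hypothetical stand-ins, since the real locations depend on the repo layout:

```python
import os

# Hypothetical weight locations -- placeholders, not the repo's actual paths.
GFPGAN_MODEL = os.path.join("models", "gfpgan", "GFPGANv1.3.pth")
REALESRGAN_MODEL = os.path.join("models", "realesrgan", "RealESRGAN_x4plus.pth")

def model_available(model_path: str) -> bool:
    # Check for the model file itself, not just its parent directory:
    # the folder can exist even when the weights were never downloaded.
    return os.path.isfile(model_path)
```

The UI can then hide the GFPGAN/RealESRGAN checkboxes whenever `model_available(...)` returns `False`.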
* Removed the need for a custom Streamlit fork; since we are not using nested columns, it is no longer needed.
* Moved the embeddings to the config file and changed the UI so it only needs to call `defaults.general.fp` to get them.
* Changed the way the image preview is updated on the UI to use the proper callback from the DDIM and PLMS samplers; because of that, we no longer need the code for those samplers inside the webui_streamlit.py file.
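A minimal sketch of such a preview hook, assuming the sampler invokes an `img_callback(img, step_index)` hook once per denoising step (as the CompVis DDIM/PLMS samplers do); `update_every` and `update_fn` are illustrative names, not the project's actual API:

```python
def make_preview_callback(update_every, update_fn):
    # Build an img_callback-style hook that refreshes the UI preview
    # only every `update_every` steps, to keep redraw overhead low.
    def img_callback(img, i):
        if i % update_every == 0:
            update_fn(img, i)
    return img_callback
```

Passing the returned function as the sampler's `img_callback` keeps all preview logic in the UI layer, so the sampler code itself no longer needs to live in webui_streamlit.py.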
* Enhanced support for variants
I played a lot with variants and wanted to keep track of them by
extending the filename with the variant amount and variant seed.
While doing that, I found that currently, if you give a variant seed and
an image seed and generate more than one image, you end up with the
same image for all runs.
I used increasing variant amounts with the same variant seed to create
movies that show how a variant increasingly deviates. I find those
fascinating, so I added that functionality to the WebGUI.
If you now set a seed for both the image and the variant and generate
more than one image, the variant amount increases by the initial
amount every step. So you can easily make a series of growing variants
and have them fully reproducible.
I hope I made sense :)
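A small sketch of the progression described above, assuming the variant amount simply grows by the initial amount on each successive image (the function name is illustrative, not from the codebase):

```python
def variant_amounts(initial_amount, n_images):
    # With a fixed image seed and variant seed, each successive image
    # bumps the variant amount by the initial amount, giving a
    # reproducible series of increasingly deviating variants.
    return [initial_amount * (step + 1) for step in range(n_images)]
```

For example, an initial amount of 0.25 over three images yields amounts 0.25, 0.5, and 0.75, so re-running the batch with the same seeds reproduces the same series.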
* Use the main seed for filenames when generating moving variants with the same seed.