The list of modules is as follows:
- webui_streamlit.py: contains the main layout as well as the functions that load the CSS needed by the layout.
- webui_streamlit_old.py: contains the code for the previous version of the WebUI. It will be removed once the new UI code is in use and everything works as it should.
- txt2img.py: contains the code for the txt2img tab.
- img2img.py: contains the code for the img2img tab.
- txt2vid.py: contains the code for the txt2vid tab.
- sd_utils.py: contains utility functions used by more than one module; any function that meets this condition should be placed here.
- ModelManager.py: contains the code for the Model Manager page on the sidebar menu.
- Settings.py: contains the code for the Settings page on the sidebar menu.
- home.py: contains the code for the Home tab with history and gallery, implemented by @devilismyfriend.
- imglab.py: contains the code for the Image Lab tab, implemented by @devilismyfriend.
- Added a Dynamic Preview Frequency option for the txt2vid tab, which tries to find the lowest value of update_preview_frequency at which the preview image can be updated during generation while minimizing the impact on performance (a sketch of the idea follows after this list).
- Added an option to save a video file to the outputs/txt2vid-samples folder after generation is complete, similar to how the save_grid option works on other tabs.
- Added a video preview to the txt2vid tab that shows the resulting video once generation is completed.
- Formatted some lines of code so they use less space and fit on a single screen.
- Added a script called Settings.py to the scripts folder, where the settings for the Settings page will be placed. Empty for now.
- Improved txt2vid speed by 2 times.
- Added the DDIM scheduler.
- Added sliders for beta_start and beta_end to give more control over these scheduler parameters.
- Added an option to select the scheduler type, either scaled_linear or linear (see the scheduler sketch after this list).
- Added an option to save info files for the txt2vid tab and improved the saved information to include most of the parameters used to run the generation.
- You can now download any model from the Hugging Face website to use on the txt2vid tab; just add its name to the custom_models_list in the config file.
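
For reference, here is a minimal sketch of the Dynamic Preview Frequency idea: compare the cost of rendering a preview against the cost of the denoising steps and nudge update_preview_frequency up or down so the preview overhead stays small. The `state` dict, the 5% budget, and the loop structure are illustrative assumptions, not the actual webui implementation.

```python
import time

PREVIEW_BUDGET = 0.05  # assumption: keep preview cost below ~5% of generation time


def adjust_preview_frequency(state, step_time, preview_time):
    """Adapt state['update_preview_frequency'] after a preview refresh.

    step_time    -- seconds spent on denoising steps since the last preview
    preview_time -- seconds spent decoding/rendering the last preview image
    """
    freq = state.get("update_preview_frequency", 10)

    if preview_time > step_time * PREVIEW_BUDGET:
        # Preview is too expensive: refresh less often.
        freq = min(freq + 1, 100)
    elif freq > 1:
        # Preview is cheap: refresh more often for smoother feedback.
        freq -= 1

    state["update_preview_frequency"] = freq
    return freq


# Hypothetical generation loop showing where the adjustment would happen.
state = {"update_preview_frequency": 10}
last_preview = time.time()
for step in range(50):
    ...  # run one denoising step
    if step % state["update_preview_frequency"] == 0:
        step_time = time.time() - last_preview
        t0 = time.time()
        ...  # decode latents and update the preview image in the UI
        adjust_preview_frequency(state, step_time, time.time() - t0)
        last_preview = time.time()
```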
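
And a hedged sketch of how the beta_start/beta_end sliders and the scheduler-type selector could be wired to a diffusers DDIMScheduler. The widget labels, ranges, and defaults are assumptions; beta_start, beta_end, and beta_schedule (which accepts "linear" and "scaled_linear") are real DDIMScheduler arguments.

```python
import streamlit as st
from diffusers import DDIMScheduler

# Sliders for the scheduler betas; ranges and defaults are illustrative only.
beta_start = st.slider("beta_start", min_value=0.0001, max_value=0.0300,
                       value=0.00085, step=0.0001, format="%.5f")
beta_end = st.slider("beta_end", min_value=0.0001, max_value=0.0300,
                     value=0.01200, step=0.0001, format="%.5f")
beta_schedule = st.selectbox("Scheduler type", ["scaled_linear", "linear"])

scheduler = DDIMScheduler(
    beta_start=beta_start,
    beta_end=beta_end,
    beta_schedule=beta_schedule,
    clip_sample=False,
    set_alpha_to_one=False,
)

# The scheduler would then be passed to whatever pipeline the txt2vid tab
# uses for generation, e.g. (model name shown only as an example):
# pipe = StableDiffusionPipeline.from_pretrained(
#     "CompVis/stable-diffusion-v1-4", scheduler=scheduler)
```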
* webui: display the GPU in use during startup
tell the user which GPU the code is actually going to use before spending lots of time loading everything onto the GPU
* typo
* add some info messages
* evaluate current GPU properly
* add debug flag gating
not everyone wants or needs to see debug messages :)
* add in stray debug msg
* webui: detect scoped-down GPU environment
check if we're running in a scoped-down GPU environment (pynvml does not respect CUDA_VISIBLE_DEVICES) so that we can measure memory on the correct GPU; a sketch of this index mapping follows below
* remove unnecessary import
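
As a rough illustration of the scoped-down GPU check: pynvml always enumerates the physical GPUs, so the CUDA device index has to be translated through CUDA_VISIBLE_DEVICES before querying memory. The helper name below is hypothetical, not the actual webui function.

```python
import os

import pynvml
import torch


def get_physical_gpu_index():
    """Map the current CUDA device index to the physical GPU index."""
    cuda_index = torch.cuda.current_device()  # index within the visible set
    visible = os.environ.get("CUDA_VISIBLE_DEVICES")
    if visible:
        # e.g. CUDA_VISIBLE_DEVICES="2,3" maps CUDA device 0 -> physical GPU 2
        return int(visible.split(",")[cuda_index])
    return cuda_index


pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(get_physical_gpu_index())
name = pynvml.nvmlDeviceGetName(handle)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"Using GPU: {name}, {mem.total / 1024**2:.0f} MiB total")
pynvml.nvmlShutdown()
```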
* Perform masked image restoration when using GFPGAN or RealESRGAN, fixing #947.
Also fixes a bug in image display when using masked image restoration with RealESRGAN.
When the image is upscaled using RealESRGAN, the image restoration cannot use the
original image because it has the wrong resolution. In this case the image restoration
restores the non-regenerated parts of the image with a RealESRGAN-upscaled
version of the original input image.
Modifications from GFPGAN or color correction in (un)masked parts are also restored
to the original image by mask blending (see the blending sketch at the end of this section).
* Update scripts/webui.py
Co-authored-by: Thomas Mello <work.mello@gmail.com>
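
A minimal sketch of the mask-blending step described above, assuming a white-means-regenerated mask and that the original image may need upscaling to match a RealESRGAN output; variable and function names are illustrative, not the actual webui code.

```python
import numpy as np
from PIL import Image


def blend_restoration(original: Image.Image, restored: Image.Image,
                      mask: Image.Image) -> Image.Image:
    """Keep restored pixels where the mask allowed regeneration,
    and the (upscaled) original pixels everywhere else."""
    # If RealESRGAN upscaled the result, bring the original and mask to the
    # same resolution before blending.
    original = original.resize(restored.size, Image.LANCZOS)
    mask = mask.resize(restored.size, Image.LANCZOS).convert("L")

    orig = np.asarray(original.convert("RGB"), dtype=np.float32)
    rest = np.asarray(restored.convert("RGB"), dtype=np.float32)
    alpha = np.asarray(mask, dtype=np.float32)[..., None] / 255.0

    blended = rest * alpha + orig * (1.0 - alpha)
    return Image.fromarray(blended.astype(np.uint8))
```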