The list of modules is as follows:
- webui_streamlit.py: contains the main layout as well as the functions that load the CSS the layout needs.
- webui_streamlit_old.py: contains the code for the previous version of the WebUI. It will be removed once the new UI code is in use and everything works as it should.
- txt2img.py: contains the code for the txt2img tab.
- img2img.py: contains the code for the img2img tab.
- txt2vid.py: contains the code for the txt2vid tab.
- sd_utils.py: contains utility functions shared by more than one module; any function that meets that condition should be placed here.
- ModelManager.py: contains the code for the Model Manager page on the sidebar menu.
- Settings.py: contains the code for the Settings page on the sidebar menu.
- home.py: contains the code for the Home tab (history and gallery), implemented by @devilismyfriend.
- imglab.py: contains the code for the Image Lab tab, implemented by @devilismyfriend.
- Added a Dynamic Preview Frequency option to the txt2vid tab that searches for the lowest update_preview_frequency value at which the preview image can be updated during generation while minimizing the performance impact (see the frequency-tuning sketch after this list).
- Added an option to save a video file in the outputs/txt2vid-samples folder after generation completes, similar to how the save_grid option works on other tabs (see the video-writing sketch after this list).
- Added a video preview to the txt2vid tab that shows the resulting video once generation completes.
- Reformatted some lines of code to make them use less space and fit on a single screen.
- Added a script called Settings.py to the script folder, in which settings for the Settings page will be placed. Empty for now.
- Improved txt2vid speed by 2x.
- Added DDIM scheduler.
- Added sliders for beta_start and beta_end to give more control over these scheduler parameters.
- Added an option to select the scheduler type: scaled_linear or linear (see the scheduler sketch after this list).
- Added an option to save info files for the txt2vid tab and improved the saved information to include most of the parameters used for the generation.
- You can now download any model from the Hugging Face website to use on the txt2vid tab; just add its name to the custom_models_list in the config file.
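A minimal sketch of the idea behind Dynamic Preview Frequency; the helper name and the exact tuning rule are illustrative, not the real implementation:

```python
# Hypothetical helper: raise update_preview_frequency when rendering the
# preview costs more than a sampling step, lower it when previews are cheap,
# so the preview stays responsive without dominating generation time.
def tune_preview_frequency(step_seconds: float, preview_seconds: float,
                           frequency: int, min_freq: int = 1,
                           max_freq: int = 100) -> int:
    if preview_seconds > step_seconds:
        frequency += 1   # previews are the bottleneck: update less often
    elif frequency > min_freq:
        frequency -= 1   # previews are cheap: try updating more often
    return max(min_freq, min(frequency, max_freq))
```

Called with measured timings after each preview update, this converges toward the lowest frequency the hardware can sustain.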
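A hedged sketch of writing the finished frames to a video file with imageio (writing MP4 also requires the imageio-ffmpeg backend; the path and helper name are illustrative):

```python
import imageio

def save_video(frames, path="outputs/txt2vid-samples/sample.mp4", fps=24):
    # frames: iterable of HxWx3 uint8 numpy arrays produced during generation
    writer = imageio.get_writer(path, fps=fps)
    for frame in frames:
        writer.append_data(frame)
    writer.close()
```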
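The beta sliders and the scheduler-type option map directly onto the diffusers scheduler arguments; a minimal sketch assuming the diffusers package (the values shown are the common Stable Diffusion defaults, not necessarily the WebUI's):

```python
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(
    beta_start=0.00085,             # exposed as a slider
    beta_end=0.012,                 # exposed as a slider
    beta_schedule="scaled_linear",  # or "linear"
)
```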
* webui: display the GPU in use during startup
tell the user which GPU the code is actually going to use before spending lots of time loading everything onto the GPU
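A minimal sketch of this kind of startup report using torch; the exact message wording is illustrative:

```python
import torch

# Report the device that will actually be used before any weights are loaded,
# so a wrong device selection is obvious immediately.
if torch.cuda.is_available():
    device_id = torch.cuda.current_device()
    print(f"Using GPU {device_id}: {torch.cuda.get_device_name(device_id)}")
else:
    print("CUDA not available; falling back to CPU")
```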
* typo
* add some info messages
* evaluate current GPU properly
* add debug flag gating
not everyone wants or needs to see debug messages :)
* add in stray debug msg
* webui: detect scoped-down GPU environment
check if we're using a scoped-down GPU environment (pynvml does not listen to CUDA_VISIBLE_DEVICES) so that we can measure memory on the correct GPU
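A hedged sketch of the index translation; it assumes CUDA_VISIBLE_DEVICES contains plain integer indices (it can also hold GPU UUIDs, which this sketch does not handle):

```python
import os
import pynvml

def nvml_handle_for_cuda_device(cuda_index: int = 0):
    # pynvml enumerates physical GPUs and ignores CUDA_VISIBLE_DEVICES, so a
    # scoped-down environment (e.g. CUDA_VISIBLE_DEVICES=1) must be mapped
    # back to the physical index before querying memory.
    visible = os.environ.get("CUDA_VISIBLE_DEVICES")
    physical = int(visible.split(",")[cuda_index]) if visible else cuda_index
    pynvml.nvmlInit()
    return pynvml.nvmlDeviceGetHandleByIndex(physical)

mem = pynvml.nvmlDeviceGetMemoryInfo(nvml_handle_for_cuda_device(0))
print(f"GPU memory used: {mem.used / 2**20:.0f} / {mem.total / 2**20:.0f} MiB")
```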
* remove unnecessary import
* Perform masked image restoration when using GFPGAN or RealESRGAN, fixing #947.
Also fixes a bug in image display when using masked image restoration with RealESRGAN.
When the image is upscaled with RealESRGAN, the restoration cannot use the
original image because it has the wrong resolution. In that case the restoration
rebuilds the non-regenerated parts of the image from a RealESRGAN-upscaled
version of the original input image.
Modifications from GFPGAN or color correction in (un)masked parts are likewise
blended back into the original image via the mask.
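The core of this restoration is alpha-blending the processed image with the (resized) original through the mask; a minimal numpy/PIL sketch with illustrative names, assuming white mask pixels mark regenerated areas:

```python
import numpy as np
from PIL import Image

def restore_unmasked(original: Image.Image, processed: Image.Image,
                     mask: Image.Image) -> Image.Image:
    """Blend so masked (regenerated) areas come from `processed` and unmasked
    areas from `original`. If `processed` was upscaled (e.g. by RealESRGAN),
    `original` and `mask` are resized to match first."""
    if original.size != processed.size:
        original = original.resize(processed.size, Image.LANCZOS)
    if mask.size != processed.size:
        mask = mask.resize(processed.size, Image.LANCZOS)
    alpha = np.asarray(mask.convert("L"), dtype=np.float32)[..., None] / 255.0
    orig = np.asarray(original.convert("RGB"), dtype=np.float32)
    proc = np.asarray(processed.convert("RGB"), dtype=np.float32)
    blended = proc * alpha + orig * (1.0 - alpha)
    return Image.fromarray(blended.astype(np.uint8))
```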
* Update scripts/webui.py
Co-authored-by: Thomas Mello <work.mello@gmail.com>
color correction is already used for loopback to prevent color drift, with the
first image as the correction target.
the option allows using color correction even without loopback mode.
it helps keep the colors similar to the input image.
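One common way to implement such color correction is histogram matching against the correction target, e.g. via scikit-image; a sketch under that assumption (not necessarily the exact method used here; `channel_axis` requires scikit-image >= 0.19):

```python
import numpy as np
from PIL import Image
from skimage import exposure

def apply_color_correction(image: Image.Image, target: Image.Image) -> Image.Image:
    """Match the color distribution of `image` to `target` (e.g. the first
    loopback image or the img2img input) to prevent color drift."""
    matched = exposure.match_histograms(
        np.asarray(image.convert("RGB")),
        np.asarray(target.convert("RGB")),
        channel_axis=-1,
    )
    return Image.fromarray(matched.astype(np.uint8))
```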
* Add mask_restore option to let users restore images based on the mask, fixing #665.
Before commit c73fdd78 (Implement masking during sampling to improve blending, #308),
the image mask was applied after sampling, so masked parts that were not regenerated
actually stayed the same.
Since c73fdd78, masked img2img changes the whole image, even in masked areas.
That looks better at first glance, but leads to image degradation when applied
a few times. See issue #665.
In a workflow of repeated masked img2img, users may want this option to keep the
parts of the image they care about free of degradation. A final masked img2img, or a
whole-image img2img with mask_restore disabled, will give the better blending of
"Implement masking during sampling".
* revert changes of a7be43ba in change_image_editor_mode
* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns
* revert inserted newlines and whitespaces to match format of previous code
* improve caption of new option mask_restore
"Only modify regenerated parts of image"
* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns
an old copy of the function exists in webui.py; this superfluous copy was mistakenly changed by the earlier commit b6a9e16b
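For context, a Gradio handler wired to N outputs must return N values; the fix appends a trailing gr.update() so the count matches. A simplified, hypothetical illustration in the Gradio 3.x style (not the real function body):

```python
import gradio as gr

def change_image_editor_mode(choice):
    # A handler bound to four outputs must return four updates; the bug was
    # returning one update too few, desynchronizing the outputs.
    show_mask = choice == "Mask"
    return [
        gr.update(visible=not show_mask),  # plain image editor
        gr.update(visible=show_mask),      # mask editor
        gr.update(visible=show_mask),      # mask-specific options
        gr.update(),                       # previously-missing trailing update
    ]
```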
* remove unused functions that are near duplicates of functions in ui_functions.py