The list of modules is as follows:
- webui_streamlit.py: contains the main layout as well as the functions that load the CSS the layout needs (a sketch of such a loader follows this list).
- webui_streamlit_old.py: contains the code for the previous version of the WebUI. It will be removed once the new UI code is in use and everything works as it should.
- txt2img.py: contains the code for the txt2img tab.
- img2img.py: contains the code for the img2img tab.
- txt2vid.py: contains the code for the txt2vid tab.
- sd_utils.py: contains utility functions used by more than one module; any function that meets this condition should be placed here.
- ModelManager.py: contains the code for the Model Manager page on the sidebar menu.
- Settings.py: contains the code for the Settings page on the sidebar menu.
- home.py: contains the code for the Home tab, with the history and gallery implemented by @devilismyfriend.
- imglab.py: contains the code for the Image Lab tab, implemented by @devilismyfriend.
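As noted for webui_streamlit.py, the layout depends on custom CSS being injected into the page. Below is a minimal sketch of such a loader, assuming a local CSS file; the path `css/streamlit.css` and the function name are illustrative, not the project's actual code.

```python
import streamlit as st

def load_css(css_file_path: str) -> None:
    """Read a local CSS file and inject it into the Streamlit page."""
    with open(css_file_path, encoding="utf-8") as f:
        css = f.read()
    # Streamlit has no built-in CSS hook, so the usual pattern is a raw <style> block.
    st.markdown(f"<style>{css}</style>", unsafe_allow_html=True)

# Hypothetical usage near the top of the layout code:
# load_css("css/streamlit.css")
```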
- Added a Dynamic Preview Frequency option to the txt2vid tab that tries to find the lowest value of update_preview_frequency at which the preview image can still be updated during generation while minimizing the impact on performance (see the sketch after this list).
- Added an option to save a video file in the outputs/txt2vid-samples folder after generation completes, similar to how the save_grid option works on other tabs.
- Added a video preview to the txt2vid tab that shows the finished video once generation completes; the saving and preview steps are sketched together after this list.
- Reformatted some lines of code so they use less space and fit on a single screen.
- Added a Settings.py script to the script folder; it will hold the settings for the Settings page and is empty for now.
- Improved txt2vid speed by a factor of 2.
- Added DDIM scheduler.
- Added sliders for beta_start and beta_end to give more control over these scheduler parameters.
- Added an option to select the scheduler's beta schedule type, either scaled_linear or linear (see the scheduler sketch after this list).
- Added an option to save info files for the txt2vid tab and improved the saved information to include most of the parameters used for the generation (included in the output-saving sketch after this list).
- Any model from the Hugging Face website can now be downloaded and used on the txt2vid tab; just add its name to custom_models_list in the config file (see the config sketch after this list).
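For context on the Dynamic Preview Frequency item above: the idea is to compare how long a preview refresh takes against how long a sampling step takes, and raise or lower update_preview_frequency accordingly. The sketch below only illustrates that idea, not the project's actual implementation; every name in it is made up.

```python
def adjust_preview_frequency(step_time: float,
                             preview_time: float,
                             current_frequency: int,
                             min_frequency: int = 1,
                             max_frequency: int = 100) -> int:
    """Return a new update_preview_frequency based on measured timings.

    step_time: seconds taken by the last sampling step.
    preview_time: seconds taken by the last preview refresh.
    """
    if preview_time < 0.25 * step_time and current_frequency > min_frequency:
        # The preview is cheap relative to a step, so refresh it more often
        # (update_preview_frequency means "every N steps", so lower = more updates).
        return current_frequency - 1
    if preview_time > step_time and current_frequency < max_frequency:
        # The preview is slowing generation down, so refresh it less often.
        return current_frequency + 1
    return current_frequency
```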
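The video saving, video preview, and info-file items above all describe what happens once the frames exist. Below is a hedged sketch of that post-processing, assuming the frames are PIL images and that imageio (with its ffmpeg backend) is available; the output layout and field names are assumptions, not the project's actual code.

```python
import json
import os
import time

import imageio
import numpy as np
import streamlit as st

def save_txt2vid_outputs(frames, params, outdir="outputs/txt2vid-samples", fps=24):
    """Encode the generated frames to an MP4, write a sidecar info file
    with the generation parameters, and preview the result in the tab."""
    os.makedirs(outdir, exist_ok=True)
    base = os.path.join(outdir, f"txt2vid-{int(time.time())}")

    video_path = f"{base}.mp4"
    imageio.mimwrite(video_path, [np.asarray(frame) for frame in frames], fps=fps)

    # Sidecar info file (the dict contents depend on what the tab collects).
    with open(f"{base}.json", "w", encoding="utf-8") as f:
        json.dump(params, f, indent=2)

    # st.video accepts a file path, so the finished video can be shown directly.
    st.video(video_path)
    return video_path
```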
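The DDIM scheduler, the beta_start/beta_end sliders, and the scaled_linear/linear choice map onto the DDIMScheduler constructor in the diffusers library. Here is a minimal sketch of how those UI values could be wired together; the slider ranges and defaults are assumptions, not the project's actual settings.

```python
import streamlit as st
from diffusers import DDIMScheduler

# Hypothetical slider ranges and defaults.
beta_start = st.slider("beta_start", min_value=0.0001, max_value=0.0300,
                       value=0.00085, step=0.00005, format="%.5f")
beta_end = st.slider("beta_end", min_value=0.0001, max_value=0.0300,
                     value=0.012, step=0.00005, format="%.5f")
beta_schedule = st.selectbox("Scheduler type", ["scaled_linear", "linear"])

scheduler = DDIMScheduler(beta_start=beta_start,
                          beta_end=beta_end,
                          beta_schedule=beta_schedule,
                          clip_sample=False)
# The scheduler is then handed to the diffusion pipeline used by the txt2vid tab.
```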
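For the custom model list above, this is a hedged illustration of how such a list might be read from a YAML config and offered in the tab; the config path and key names (configs/webui/webui_streamlit.yaml, txt2vid.custom_models_list) are assumptions made for the example.

```python
import streamlit as st
from omegaconf import OmegaConf

# Hypothetical config path and key layout; adjust to the real config file.
defaults = OmegaConf.load("configs/webui/webui_streamlit.yaml")
custom_models = list(defaults.txt2vid.custom_models_list)

# Any Hugging Face model name added to custom_models_list appears here.
model_name = st.selectbox("Model", custom_models)
```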
* Add mask_restore option to let users restore the image based on the mask, fixing #665.
Before commit c73fdd78 (Implement masking during sampling to improve blending, #308),
the image mask was applied after sampling, so the masked parts that were not regenerated
actually stayed the same.
Since c73fdd78, masked img2img changes the whole image, even in masked areas.
That gives better-looking results at first glance, but leads to image degradation when
applied a few times. See issue #665.
In a workflow of repeated masked img2img, users may want to use this option to keep the
parts of the image they actually want to keep without degradation (a minimal sketch of the
restore step follows this list of commits). A final masked img2img, or a whole-image img2img
with mask_restore disabled, will give the better blending of "Implement masking during sampling".
* revert changes of a7be43ba in change_image_editor_mode
* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns
* revert inserted newlines and whitespace to match the format of the previous code
* improve caption of new option mask_restore
"Only modify regenerated parts of image"
* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns
an old copy of the function exists in webui.py; this superfluous function was mistakenly changed by the earlier commit b6a9e16b
* remove unused functions that are near duplicates of functions in ui_functions.py
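A minimal sketch of the mask_restore step described in the first commit above, assuming PIL images of matching size and a greyscale mask where white marks the region to regenerate; the function name is made up for the example.

```python
from PIL import Image

def restore_unmasked_regions(original: Image.Image,
                             generated: Image.Image,
                             mask: Image.Image) -> Image.Image:
    """Keep generated pixels only where the mask is white and copy the
    original pixels back everywhere else, so repeated masked img2img runs
    do not degrade the parts the user wanted to keep."""
    mask_l = mask.convert("L")
    # Image.composite takes pixels from the first image where the mask is
    # white (255) and from the second image where it is black (0).
    return Image.composite(generated, original, mask_l)
```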
- Implemented a basic Text To Video tab; it is still very basic and will continue to be improved.
- Improved model loading; errors about the model not being loaded correctly should now appear less frequently (a hedged caching sketch follows).
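In the spirit of the model-loading improvement above, one common Streamlit pattern is to cache the loaded model in st.session_state so script reruns do not reload it or leave it half-initialized. This is a hedged sketch, not the project's actual loader; get_model and the loader callable are illustrative names.

```python
import streamlit as st

def get_model(model_name: str, loader):
    """Load the model once and reuse it across Streamlit reruns.

    `loader` is any callable that actually builds the model; it is only
    called when the model is missing from the session cache.
    """
    key = f"model::{model_name}"
    if key not in st.session_state:
        try:
            st.session_state[key] = loader(model_name)
        except Exception as err:
            st.error(f"Failed to load {model_name}: {err}")
            raise
    return st.session_state[key]
```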