- Added a Dynamic Preview Frequency option to the txt2vid tab, which tries to find the lowest update_preview_frequency value at which the preview image can still be updated during generation while minimizing the impact on performance (a sketch of one possible heuristic follows this list).
- Added an option to save a video file to the outputs/txt2vid-samples folder after generation is complete, similar to how the save_grid option works on other tabs.
- Added a video preview to the txt2vid tab that shows the resulting video when generation is completed.
- Formatted some lines of code so they use less space and fit on a single screen.
- Added a script called Settings.py to the script folder, in which settings for the Settings page will be placed. Empty for now.
- Improved txt2vid speed by 2 times.
- Added the DDIM scheduler.
- Added sliders for beta_start and beta_end to give more control over these scheduler parameters.
- Added an option to select the scheduler type, either scaled_linear or linear (see the scheduler sketch after this list).
- Added an option to save info files for the txt2vid tab and improved the saved information to include most of the parameters used for the generation.
- You can now download any model from the Hugging Face website to use on the txt2vid tab; just add its name to the custom_models_list in the config file.
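
The Dynamic Preview Frequency heuristic mentioned above is only described at a high level; a minimal, hypothetical sketch of one way such a heuristic could work (the function and parameter names, such as `tune_preview_frequency` and `max_overhead`, are illustrative and do not correspond to actual code in the repository):

```python
# Hypothetical sketch: raise or lower update_preview_frequency so that the time
# spent decoding/showing previews stays a small fraction of total generation time.
def tune_preview_frequency(step_time, preview_time, current_frequency,
                           max_overhead=0.05, min_frequency=1, max_frequency=100):
    # Fraction of per-step time spent on previews if we preview every
    # `current_frequency` steps.
    overhead = preview_time / (step_time * current_frequency + 1e-8)
    if overhead > max_overhead:
        current_frequency += 1      # previews are too costly: update less often
    elif overhead < max_overhead / 2 and current_frequency > min_frequency:
        current_frequency -= 1      # plenty of headroom: update more often
    return max(min_frequency, min(max_frequency, current_frequency))
```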
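The scheduler-related items above map naturally onto the diffusers API; a minimal sketch, assuming the txt2vid pipeline is built on diffusers (the exact wiring in this repository may differ):

```python
# Minimal sketch: build a DDIM scheduler from the values exposed by the new
# beta_start/beta_end sliders and the scheduler-type option.
from diffusers import DDIMScheduler

def make_scheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"):
    # beta_schedule is either "scaled_linear" or "linear", matching the new UI option.
    return DDIMScheduler(
        beta_start=beta_start,
        beta_end=beta_end,
        beta_schedule=beta_schedule,
        clip_sample=False,
        set_alpha_to_one=False,
    )
```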
* webui: display the GPU in use during startup
tell the user which GPU the code is actually going to use before spending lots of time loading everything onto the GPU
* typo
* add some info messages
* evaluate current GPU properly
* add debug flag gating
not everyone wants or needs to see debug messages :)
* add in stray debug msg
* webui: detect scoped-down GPU environment
check whether we're running in a scoped-down GPU environment (pynvml does not honor CUDA_VISIBLE_DEVICES) so that we can measure memory on the correct GPU (see the sketch below)
* remove unnecessary import
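
Because pynvml indexes physical devices and ignores CUDA_VISIBLE_DEVICES, the visible-device mapping has to be resolved by hand; a minimal sketch of the idea (not necessarily the exact code in webui.py, and assuming CUDA_VISIBLE_DEVICES contains numeric indices rather than GPU UUIDs):

```python
# Minimal sketch: map torch's current CUDA device back to the physical GPU index
# so pynvml reports memory for the GPU that is actually in use.
import os
import torch
import pynvml

def gpu_memory_used_mb():
    device_index = torch.cuda.current_device()   # index among *visible* devices
    visible = os.environ.get("CUDA_VISIBLE_DEVICES")
    if visible:
        # pynvml does not honor CUDA_VISIBLE_DEVICES, so translate to the
        # physical index (assumes numeric indices, not GPU UUIDs).
        device_index = int(visible.split(",")[device_index])
    pynvml.nvmlInit()
    used = pynvml.nvmlDeviceGetMemoryInfo(
        pynvml.nvmlDeviceGetHandleByIndex(device_index)
    ).used
    pynvml.nvmlShutdown()
    return used / (1024 ** 2)
```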
* Perform masked image restoration when using GFPGAN or RealESRGAN, fixing #947.
Also fixes a bug in image display when using masked image restoration with RealESRGAN.
When the image is upscaled using RealESRGAN, the restoration step cannot use the
original image directly because it has the wrong resolution. In this case the
non-regenerated parts of the image are restored from a RealESRGAN-upscaled
version of the original input image.
Modifications from GFPGAN or color correction in the (un)masked parts are also
restored to the original image by mask blending (see the sketch below).
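
A minimal sketch of the mask blending described here, with a placeholder `upscale` callable standing in for RealESRGAN (the actual implementation in webui.py may differ):

```python
# Minimal sketch: keep regenerated pixels only inside the mask and restore the
# rest from the original image, upscaling the original first if the result was
# produced at a higher resolution (e.g. by RealESRGAN).
from PIL import Image

def restore_unmasked(original: Image.Image, result: Image.Image,
                     mask: Image.Image, upscale=None) -> Image.Image:
    if original.size != result.size:
        # `upscale` stands in for RealESRGAN; fall back to a plain resize.
        original = upscale(original) if upscale else original.resize(result.size, Image.LANCZOS)
    mask = mask.convert("L").resize(result.size)
    # White (masked) areas keep the regenerated result, black areas keep the original.
    return Image.composite(result, original, mask)
```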
* Update scripts/webui.py
Co-authored-by: Thomas Mello <work.mello@gmail.com>
Color correction is already used for loopback to prevent color drift, with the first image as the correction target.
This option allows using color correction even without loopback mode.
It helps keep the colors similar to the input image (see the sketch below).
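
One common way to implement this kind of color correction is histogram matching against the correction target, e.g. with scikit-image; a minimal sketch (the repository's actual implementation may differ):

```python
# Minimal sketch: match the colors of a generated image to a correction target
# (for loopback, the first image) to counteract color drift.
import numpy as np
from PIL import Image
from skimage import exposure

def apply_color_correction(image: Image.Image, target: Image.Image) -> Image.Image:
    corrected = exposure.match_histograms(
        np.asarray(image), np.asarray(target),
        channel_axis=-1,  # older scikit-image versions use multichannel=True instead
    )
    return Image.fromarray(corrected.astype(np.uint8))
```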
* Add a mask_restore option to give users the ability to restore images based on the mask, fixing #665.
Before commit c73fdd78 (Implement masking during sampling to improve blending, #308),
the image mask was applied after sampling, so the masked parts that were not regenerated
actually stayed the same.
Since c73fdd78, masked img2img changes the whole image, even in masked areas.
This gives better-looking results at first glance, but causes image degradation when
applied a few times. See issue #665.
In a workflow of repeated masked img2img, users may want to use this option to keep the parts
of the image they actually want to keep, without degradation. A final masked img2img, or whole-image img2img with mask_restore disabled,
will give the better blending of "Implement masking during sampling".
* revert changes of a7be43ba in change_image_editor_mode
* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns
* revert inserted newlines and whitespaces to match format of previous code
* improve caption of new option mask_restore
"Only modify regenerated parts of image"
* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns
an old copy of the function exists in webui.py; this superfluous function was mistakenly changed by the earlier commit b6a9e16b (see the gr.update sketch below)
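
For context, a Gradio event handler has to return one value per declared output, and gr.update() is the way to leave a component unchanged; a simplified, hypothetical sketch (not the real signature of change_image_editor_mode):

```python
# Simplified sketch: a handler wired to several outputs must return one value
# (or gr.update()) per output; appending gr.update() pads the list so the
# number of returned values matches the number of outputs.
import gradio as gr

def change_image_editor_mode(mode):
    show_mask_editor = (mode == "Mask")
    return [
        gr.update(visible=show_mask_editor),      # mask editor component
        gr.update(visible=not show_mask_editor),  # plain image input component
        gr.update(),                              # padding for the extra output
    ]
```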
* remove unused functions that are near duplicates of functions in ui_functions.py
* Metadata cleanup - Maintain metadata within UI
This commit, when combined with Gradio 3.2.1b1+, maintains image
metadata as an image is passed throughout the UI. For example,
if you generate an image, send it to Image Lab, upscale it, fix faces,
and then drag the resulting image back into Image Lab, it will still
remember the image generation parameters.
When the image is saved, the metadata will be stripped from it if
save-metadata is not enabled. If the image is saved by *dragging* it
out of the UI onto the filesystem, it may retain its metadata (see the sketch below).
Note: I have run into UI responsiveness issues when upgrading Gradio;
there seem to be some Gradio queue management issues. *Without* the
Gradio update this commit maintains current functionality, but
will not keep metadata when dragging an image between UI components.
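
On disk, keeping or stripping the parameters comes down to PNG text chunks; a minimal sketch with PIL of the kind of logic the metadata handling wraps (the "parameters" key and function name here are illustrative):

```python
# Minimal sketch: write generation parameters into a PNG text chunk when
# save-metadata is enabled, otherwise save the image without metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_optional_metadata(image: Image.Image, path: str,
                                parameters: str, save_metadata: bool) -> None:
    if save_metadata:
        info = PngInfo()
        info.add_text("parameters", parameters)  # key name is illustrative
        image.save(path, pnginfo=info)
    else:
        image.save(path)  # metadata stripped
```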
* Move ImageMetadata into its own file
Cleans up webui, enables webui_streamlit et al to use it as well.
* Fix typo
I want people of all skill levels to be able to contribute
This is one way the code could be split up, with the aim of making it easier to understand and contribute to, especially for people on the lower end of the skill spectrum.
All I've done is split things; I think renaming and reorganising is still needed.
- Implemented a basic Text To Video tab; will continue to improve it, as it is really basic right now.
- Improved the model loading; you should now see errors about it not being loaded correctly less frequently.
* #715#699#698#663#625#617#611#604 (#716)
* Update README.md
* Add sampler name to metadata (#695)
Co-authored-by: EliEron <example@example.com>
* old-dev-merge
Co-authored-by: EliEron <subanimehd@gmail.com>
Co-authored-by: EliEron <example@example.com>
* img2img-fix (#717)
* Revert "img2img-fix (#717)"
This reverts commit 70d4b1ca2a.
* img2img fixes
* Revert "img2img fixes"
This reverts commit e66eddc621.
* Revert "Revert "img2img-fix (#717)""
This reverts commit bf08b617d4.
* img2img fixed
* - Removed duplicated calls to save_sample.
- Changed variables and arguments to be more self-explanatory, making it easier to understand what they do.
* Moved streamlit files to their proper location; previously they were incorrectly added to the repository root folder.
* Added retry dependency for the streamlit version.
* Added .cmd file for easy running and updating the streamlit version of the UI.
* Removed duplicated entry for streamlit on the environment.yaml file.
* Removed some unnecessary lines from the webui_streamlit.cmd file.
* add gfpgan folder to gitignore, auto gen by imglab
* added placeholder text similar to gradio
* added auto conversion of 4-channel (RGBA) PNGs to RGB
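
A minimal sketch of the kind of conversion involved (flattening the alpha channel onto a background rather than simply dropping it), not necessarily the exact code added here:

```python
# Minimal sketch: convert a 4-channel (RGBA) PNG to RGB by compositing the
# alpha channel onto a solid background instead of discarding it.
from PIL import Image

def to_rgb(image: Image.Image, background=(255, 255, 255)) -> Image.Image:
    if image.mode == "RGBA":
        base = Image.new("RGB", image.size, background)
        base.paste(image, mask=image.split()[3])  # use the alpha channel as the paste mask
        return base
    return image.convert("RGB")
```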
* fix: regex escape characters
* Update Readme links to sd-webui when appropriate (#781)
* Update link to sd-webui when appropriate
* added LDSR instruction per devilismyfriend guide
* fix: stack overflow during recursion call (#784)
* Added an option to set the default sampler name from the config file; useful for those who want to change the default sampler and have it persist even after closing and reopening the UI.
* Added a try/except block to handle basic errors like StopException, which is raised by streamlit when you hit the stop button, and KeyError, which also happens when stopping the generation because the code tries to check the model at the end when it is no longer loaded; this can safely be ignored, hence the exception handling.
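
A minimal sketch of the pattern; the StopException import path differs between Streamlit versions, so it is resolved defensively here, and `run_generation`, `generate`, and the message shown are placeholders:

```python
# Minimal sketch: ignore the exceptions raised when the user hits "Stop".
try:
    from streamlit.runtime.scriptrunner import StopException  # newer Streamlit
except ImportError:
    from streamlit.script_runner import StopException         # older Streamlit

import streamlit as st

def run_generation(generate, *args, **kwargs):
    try:
        return generate(*args, **kwargs)
    except (StopException, KeyError):
        # StopException: the user pressed the stop button.
        # KeyError: the model is checked at the end of generation but is no
        # longer loaded after an interrupt; safe to ignore.
        st.info("Generation stopped.")
        return None
```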
* separate css to external file
* Added "git pull" and "git stash" to the commands run by the cmd scripts when launching the UI, this should make it so people who use it can automatically update the code from the repo and be up to date without manually using those commands everytime.
* resolve conflict with master
Co-authored-by: EliEron <subanimehd@gmail.com>
Co-authored-by: EliEron <example@example.com>
Co-authored-by: ZeroCool <ZeroCool940711@users.noreply.github.com>
Co-authored-by: ZeroCool940711 <alejandrogilelias940711@gmail.com>
Co-authored-by: Hafiidz <3688500+Hafiidz@users.noreply.github.com>
Co-authored-by: Thomas Mello <work.mello@gmail.com>