The bridge will keep looping on the same generation because the
evaluation of `not seed` in `while not seed` is always True when seed
is 0 (or 00000000, etc.), so the loop never exits.
This commit fixes that. It also allows requesting more verbosity via
the webui command.
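For illustration, a minimal sketch of the buggy pattern and the fix (the names are hypothetical, not the repo's exact code):

```python
def fetch_generation():
    """Hypothetical stand-in for popping a generation request from the horde."""
    return 0  # seed 0 is a perfectly valid seed


# Buggy pattern: `not seed` is True when seed == 0, so this would spin
# forever on a valid zero seed:
#     while not seed:
#         seed = fetch_generation()

# Fixed pattern: loop only while no seed has been assigned at all.
seed = None
while seed is None:
    seed = fetch_generation()
```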
Co-authored-by: hlky <106811348+hlky@users.noreply.github.com>
Co-authored-by: lukas5450 <46075099+lukas5450@users.noreply.github.com>
# Summary of the change
- new Scene-to-Image tab
- new scn2img function
- functions for loading and running monocular_depth_estimation with
tensorflow
# Description
Related to discussion #925
> Would it be possible to have a layers system where we could have
foreground, mid, and background objects which relate to one another and
share the style? So we could, say, generate a landscape, on another
layer generate a castle, and on another layer generate a crowd of people.
To make this work I built a prompt-based layering system in a new
"Scene-to-Image" tab.
You write a multi-line prompt that looks like markdown, where each
section declares one layer.
The layers are hierarchical, so each layer can have its own child layers.
Examples: https://imgur.com/a/eUxd5qn
![](https://i.imgur.com/L61w00Q.png)
In the frontend you can find brief documentation for the syntax,
examples, and a reference for the various arguments.
Here is a short summary:
- Sections with a "prompt" and child layers are img2img; without child
layers they are txt2img.
- Sections without a "prompt" are just images, useful for mask
selection, image composition, etc.
- Images can be initialized with "color", resized with "resize", and
positioned with "pos".
- Rotation and the rotation center are set with "rotation" and "center".
- Masks can be selected automatically by color, or by estimated depth
based on https://huggingface.co/spaces/atsantiago/Monocular_Depth_Filter.
![](https://i.imgur.com/8rMHWmZ.png)
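For illustration, a layered prompt might look roughly like this (a hypothetical sketch using the keys listed above; the exact syntax is documented in the frontend and may differ):

```
## background
prompt: a mountain landscape at sunset
### castle
prompt: a stone castle on a hill
pos: 256, 128
rotation: 10
### crowd
prompt: a crowd of people
pos: 128, 384
```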
# Additional dependencies that are required for this change
For mask selection by monocular depth estimation, TensorFlow is required
and the model must be cloned to ./src/monocular_depth_estimation/
Changes in environment.yaml:
- einops>=0.3.0
- tensorflow>=2.10.0
einops must be allowed to be newer for TensorFlow to work.
# Checklist:
- [x] I have changed the base branch to `dev`
- [x] I have performed a self-review of my own code
- [x] I have commented my code in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
Co-authored-by: hlky <106811348+hlky@users.noreply.github.com>
# Adds the bridge code which, when enabled, turns the webui into a
headless [Stable Horde](https://stablehorde.net) instance
It adds a few new command-line args for passing variables to the
bridge, as well as the possibility to set them via a variables file,
`bridgeData.py`.
To start the bridge, one needs to add the `--bridge` argument to their
relauncher.py, as well as any horde vars they want to specify.
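For illustration, a hypothetical `bridgeData.py` might look like this (the variable names here are illustrative only; the actual ones are defined by the bridge code):

```python
# Hypothetical bridgeData.py sketch; variable names are illustrative only.
horde_url = "https://stablehorde.net"  # which horde to serve
horde_name = "My Headless Worker"      # how this instance shows up on the horde
horde_api_key = "0000000000"           # your Stable Horde API key
```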
On top of that, this adds the loguru module as well as my tuned loguru
config. This provides much nicer logging output and the capability to
save output to files for issue reports etc. For now only the bridge
uses the nice format, but once this is merged, you can start replacing
`print()` with `logger.xxx()` where appropriate.
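A sketch of the intended migration, using standard loguru API (the tuned format/config from this PR is not reproduced here):

```python
from loguru import logger

# Add a file sink so output can be attached to issue reports.
logger.add("logs/webui_{time}.log", rotation="10 MB")

logger.info("Bridge started")          # instead of print("Bridge started")
logger.debug("payload details: ...")   # only shown at higher verbosity
logger.warning("generation took unusually long")
```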
To make the bridge work, I've had to add defaults to txt2img, but this
should not affect existing behavior.
# Checklist:
- [x] I have changed the base branch to `dev`
- [x] I have performed a self-review of my own code
- [x] I have commented my code in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
Co-authored-by: hlky <106811348+hlky@users.noreply.github.com>
Co-authored-by: Thomas Mello <work.mello@gmail.com>
Co-authored-by: Joshua Kimsey <jkimsey95@gmail.com>
Co-authored-by: ZeroCool <ZeroCool940711@users.noreply.github.com>
The regex was not properly accounting for prompt weights that begin
with a bare decimal point, such as .5 or .1; it split those off into
their own prompt, which broke the parsing.
For example, the prompt string "Fruit:1 grapes:-.5" should parse as
[('Fruit', 1.0), ('grapes', -0.5)]
but was being incorrectly parsed as
[('Fruit', 1.0), ('grapes', 1.0), ('-.5', 1.0)]
This fixes that by making the regex properly match decimals.
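A minimal sketch of the idea (not the repo's actual regex): the weight pattern must accept integers, decimals, and bare fractions like .5 or -.5:

```python
import re

# "prompt:weight" pairs; the weight may be an int, a decimal, or a bare
# fraction such as .5 or -.5.
pattern = re.compile(r"(.+?):\s*(-?(?:\d+(?:\.\d+)?|\.\d+))(?:\s+|$)")

def parse(prompt: str):
    return [(m.group(1).strip(), float(m.group(2)))
            for m in pattern.finditer(prompt)]

print(parse("Fruit:1 grapes:-.5"))  # [('Fruit', 1.0), ('grapes', -0.5)]
```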
* JobManager: Re-merge #611
PR #611 seems to have gotten lost in the shuffle after
the transition to 'dev'.
This commit re-merges the feature branch. It adds
support for viewing preview images as the image
generates, as well as cancelling in-progress images,
plus a couple of fixes and clean-ups.
* JobManager: Clear jobs that fail to start
Sometimes if a job fails to start it will get stuck in the active job
list. This commit ensures that jobs that raise exceptions are cleared,
and also adds a start timer to clear out jobs that fail to start
within a reasonable amount of time.
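A minimal sketch of the clean-up idea (hypothetical structures, not the actual JobManager code):

```python
import time

START_TIMEOUT_S = 30.0  # assumed "reasonable amount of time"

def run_job(active_jobs: dict, job_id: str, func):
    active_jobs[job_id] = {"started": time.monotonic()}
    try:
        return func()
    finally:
        # Runs even if func() raises, so a failed job never lingers
        # in the active job list.
        active_jobs.pop(job_id, None)

def reap_stalled_jobs(active_jobs: dict) -> None:
    # Called periodically: drop jobs that failed to start in time.
    now = time.monotonic()
    for job_id, info in list(active_jobs.items()):
        if now - info["started"] > START_TIMEOUT_S:
            active_jobs.pop(job_id, None)
```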
* webui: display the GPU in use during startup
tell the user which GPU the code is actually going to use before spending lots of time loading everything onto the GPU
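A minimal sketch of the idea (not the repo's exact code), using the standard torch API:

```python
import torch

# Announce the chosen GPU before the slow model-loading step begins.
if torch.cuda.is_available():
    device_id = torch.cuda.current_device()
    print(f"Loading models onto GPU {device_id}: "
          f"{torch.cuda.get_device_name(device_id)}")
else:
    print("CUDA not available; falling back to CPU")
```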
* typo
* add some info messages
* evaluate current GPU properly
* add debug flag gating
not everyone wants or needs to see debug messages :)
* add in stray debug msg
* webui: detect scoped-down GPU environment
check if we're using a scoped-down GPU environment (pynvml does not listen to CUDA_VISIBLE_DEVICES) so that we can measure memory on the correct GPU
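A sketch of the idea with a hypothetical helper: pynvml enumerates physical GPUs and ignores CUDA_VISIBLE_DEVICES, so the visible device has to be mapped back to a physical index before measuring memory. This assumes CUDA_VISIBLE_DEVICES holds integer indices rather than GPU UUIDs:

```python
import os
import pynvml

def physical_gpu_index(default: int = 0) -> int:
    # Map the first visible CUDA device to its physical pynvml index.
    visible = os.environ.get("CUDA_VISIBLE_DEVICES")
    if visible:
        return int(visible.split(",")[0])
    return default

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(physical_gpu_index())
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"free VRAM on active GPU: {mem.free / 2**20:.0f} MiB")
```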
* remove unnecessary import
* Perform masked image restoration when using GFPGAN or RealESRGAN, fixing #947.
Also fixes a bug in image display when using masked image restoration with RealESRGAN.
When the image is upscaled using RealESRGAN, the image restoration cannot use the
original image because it has the wrong resolution. In this case the image restoration
will restore the non-regenerated parts of the image with a RealESRGAN-upscaled
version of the original input image.
Modifications from GFPGAN or color correction in (un)masked parts are also restored
to the original image by mask blending.
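A sketch of the mask-blending idea (a hypothetical helper, not the repo's code), using standard PIL:

```python
from PIL import Image

def restore_unmasked(original: Image.Image, generated: Image.Image,
                     mask: Image.Image) -> Image.Image:
    if original.size != generated.size:
        # Stand-in for the RealESRGAN upscale of the original input image.
        original = original.resize(generated.size, Image.LANCZOS)
    mask = mask.convert("L").resize(generated.size)
    # White (255) mask pixels keep the regenerated result; black (0)
    # pixels are restored from the original.
    return Image.composite(generated, original, mask)
```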
* Update scripts/webui.py
Co-authored-by: Thomas Mello <work.mello@gmail.com>
Color correction is already used for loopback to prevent color drift,
with the first image as the correction target.
This option allows using color correction even without loopback mode;
it helps keep the colors similar to the input image.
* Add mask_restore option to give users the option to restore images based on the mask, fixing #665.
Before commit c73fdd78 (Implement masking during sampling to improve blending, #308),
the image mask was applied after sampling, so the masked parts were not
regenerated and actually stayed the same.
Since c73fdd78, masked img2img changes the whole image, even in masked areas.
That gives better-looking results at first glance, but results in image degradation
when applied a few times. See issue #665.
In a workflow of repeated masked img2img, users may want to use this option to keep
the parts of the image they actually want to keep, without image degradation. A final
masked img2img, or a whole-image img2img with mask_restore disabled,
will give the better blending of "Implement masking during sampling".
* revert changes of a7be43ba in change_image_editor_mode
* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns
* revert inserted newlines and whitespaces to match format of previous code
* improve caption of new option mask_restore
"Only modify regenerated parts of image"
* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns
an old copy of the function exists in webui.py; this superfluous function was
mistakenly changed by the earlier commit b6a9e16b
* remove unused functions that are near duplicates of functions in ui_functions.py
* Metadata cleanup - Maintain metadata within UI
This commit, when combined with Gradio 3.2.1b1+, maintains image
metadata as an image is passed through the UI. For example,
if you generate an image, send it to Image Lab, upscale it, fix faces,
and then drag the resulting image back into Image Lab, it will still
remember the image generation parameters.
When the image is saved, the metadata will be stripped from it if
save-metadata is not enabled. If the image is saved by *dragging* it
out of the UI onto the filesystem, it may keep its metadata.
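A minimal sketch of the save-time behavior (hypothetical function and keys, not the repo's code), using PIL's PNG text chunks:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_image(img: Image.Image, path: str, params: dict,
               save_metadata: bool) -> None:
    if save_metadata:
        # Embed generation parameters as PNG text chunks so they survive
        # a round-trip through the UI.
        info = PngInfo()
        for key, value in params.items():
            info.add_text(key, str(value))
        img.save(path, pnginfo=info)
    else:
        img.save(path)  # a plain save drops the text chunks
```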
Note: I have run into UI responsiveness issues when upgrading Gradio;
there seem to be some Gradio queue management issues. *Without* the
Gradio update, this commit maintains current functionality, but
will not keep metadata when dragging an image between UI components.
* Move ImageMetadata into its own file
Cleans up webui, enables webui_streamlit et al to use it as well.
* Fix typo