Merge branch 'dev' into dependabot/pip/dev/fairscale-0.4.12

Alejandro Gil 2022-11-02 21:48:18 -07:00 committed by GitHub
commit 8bfd91bb4d
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
89 changed files with 330886 additions and 156703 deletions

11
.idea/.gitignore vendored

@@ -1,11 +0,0 @@
# Default ignored files
/shelf/
/workspace.xml
# Editor-based HTTP Client requests
/httpRequests/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml
*.pyc
.idea


@@ -34,7 +34,6 @@ maxMessageSize = 200
enableWebsocketCompression = false
[browser]
serverAddress = "localhost"
gatherUsageStats = false
serverPort = 8501

140
README.md

@@ -1,41 +1,45 @@
# <center>Web-based UI for Stable Diffusion</center>
## Created by [sd-webui](https://github.com/sd-webui)
## Created by [Sygil.Dev](https://github.com/sygil-dev)
## [Visit sd-webui's Discord Server](https://discord.gg/gyXNe4NySY) [![Discord Server](https://user-images.githubusercontent.com/5977640/190528254-9b5b4423-47ee-4f24-b4f9-fd13fba37518.png)](https://discord.gg/gyXNe4NySY)
## [Join us at Sygil.Dev's Discord Server](https://discord.gg/gyXNe4NySY) [![Discord Server](https://user-images.githubusercontent.com/5977640/190528254-9b5b4423-47ee-4f24-b4f9-fd13fba37518.png)](https://discord.gg/gyXNe4NySY)
## Installation instructions for:
- **[Windows](https://sd-webui.github.io/stable-diffusion-webui/docs/1.windows-installation.html)**
- **[Linux](https://sd-webui.github.io/stable-diffusion-webui/docs/2.linux-installation.html)**
- **[Windows](https://sygil-dev.github.io/sygil-webui/docs/1.windows-installation.html)**
- **[Linux](https://sygil-dev.github.io/sygil-webui/docs/2.linux-installation.html)**
### Want to ask a question or request a feature?
Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions](https://github.com/sd-webui/stable-diffusion-webui/discussions).
Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions](https://github.com/sygil-dev/sygil-webui/discussions).
## Documentation
[Documentation is located here](https://sd-webui.github.io/stable-diffusion-webui/)
[Documentation is located here](https://sygil-dev.github.io/sygil-webui/)
## Want to contribute?
Check the [Contribution Guide](CONTRIBUTING.md)
[sd-webui](https://github.com/sd-webui) is:
[Sygil-Dev](https://github.com/Sygil-Dev) main devs:
* ![hlky's avatar](https://avatars.githubusercontent.com/u/106811348?s=40&v=4) [hlky](https://github.com/hlky)
* ![ZeroCool940711's avatar](https://avatars.githubusercontent.com/u/5977640?s=40&v=4)[ZeroCool940711](https://github.com/ZeroCool940711)
* ![codedealer's avatar](https://avatars.githubusercontent.com/u/4258136?s=40&v=4)[codedealer](https://github.com/codedealer)
### Project Features:
* Two great Web UI's to choose from: Streamlit or Gradio
* No more manually typing parameters, now all you have to do is write your prompt and adjust sliders
* Built-in image enhancers and upscalers, including GFPGAN and realESRGAN
* Generator Preview: See your image as it's being made
* Run additional upscaling models on CPU to save VRAM
* Textual inversion 🔥: [info](https://textual-inversion.github.io/) - requires enabling, see [here](https://github.com/hlky/sd-enable-textual-inversion), script works as usual without it enabled
* Advanced img2img editor with Mask and crop capabilities
* Mask painting 🖌️: Powerful tool for re-generating only specific parts of an image you want to change (currently Gradio only)
* More diffusion samplers 🔥🔥: A great collection of samplers to use, including:
- `k_euler` (Default)
* Textual inversion: [Research Paper](https://textual-inversion.github.io/)
* K-Diffusion Samplers: A great collection of samplers to use, including:
- `k_euler`
- `k_lms`
- `k_euler_a`
- `k_dpm_2`
@@ -44,57 +48,78 @@ Check the [Contribution Guide](CONTRIBUTING.md)
- `PLMS`
- `DDIM`
* Loopback ➿: Automatically feed the last generated sample back into img2img
* Prompt Weighting 🏋️: Adjust the strength of different terms in your prompt
* Selectable GPU usage with `--gpu <id>`
* Memory Monitoring 🔥: Shows VRAM usage and generation time after outputting
* Word Seeds 🔥: Use words instead of seed numbers
* CFG: Classifier free guidance scale, a feature for fine-tuning your output
* Automatic Launcher: Activate conda and run Stable Diffusion with a single command
* Lighter on VRAM: 512x512 Text2Image & Image2Image tested working on 4GB
* Loopback: Automatically feed the last generated sample back into img2img
* Prompt Weighting & Negative Prompts: Gain more control over your creations
* Selectable GPU usage from Settings tab
* Word Seeds: Use words instead of seed numbers
* Automated Launcher: Activate conda and run Stable Diffusion with a single command
* Lighter on VRAM: 512x512 Text2Image & Image2Image tested working on 4GB (with *optimized* mode enabled in Settings)
* Prompt validation: If your prompt is too long, you will get a warning in the text output field
* Copy-paste generation parameters: A text output provides generation parameters in an easy-to-copy-paste form for sharing.
* Correct seeds for batches: If you use a seed of 1000 to generate two batches of two images each, four generated images will have seeds: `1000, 1001, 1002, 1003`.
* Sequential seeds for batches: If you use a seed of 1000 to generate two batches of two images each, the four generated images will have seeds: `1000, 1001, 1002, 1003` (see the sketch at the end of this feature list).
* Prompt matrix: Separate multiple prompts using the `|` character, and the system will produce an image for every combination of them.
* Loopback for Image2Image: A checkbox for img2img allowing you to automatically feed the output image back as input for the next batch. Equivalent to saving the output image and replacing the input image with it.
* [Gradio] Advanced img2img editor with Mask and crop capabilities
# Stable Diffusion Web UI
A fully-integrated and easy way to work with Stable Diffusion right from a browser window.
* [Gradio] Mask painting 🖌️: Powerful tool for re-generating only specific parts of an image you want to change (currently Gradio only)
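A minimal sketch of two of the behaviors above (sequential batch seeds and the `|` prompt matrix); the helper names are illustrative only, not the project's actual implementation:

```python
from itertools import combinations

def batch_seeds(seed: int, batch_count: int, batch_size: int) -> list[int]:
    # Sequential seeds: 1000 with two batches of two -> [1000, 1001, 1002, 1003]
    return [seed + i for i in range(batch_count * batch_size)]

def prompt_matrix(prompt: str) -> list[str]:
    # '|' separates a base prompt from optional parts; one image per combination
    base, *parts = [p.strip() for p in prompt.split("|")]
    return [
        ", ".join([base, *combo])
        for r in range(len(parts) + 1)
        for combo in combinations(parts, r)
    ]

assert batch_seeds(1000, 2, 2) == [1000, 1001, 1002, 1003]
print(prompt_matrix("a castle | at night | oil painting"))
# ['a castle', 'a castle, at night', 'a castle, oil painting', 'a castle, at night, oil painting']
```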
# SD WebUI
An easy way to work with Stable Diffusion right from your browser.
## Streamlit
![](images/streamlit/streamlit-t2i.png)
**Features:**
- Clean UI with an easy to use design, with support for widescreen displays.
- Dynamic live preview of your generations
- Easily customizable presets right from the WebUI (Coming Soon!)
- An integrated gallery to show the generations for a prompt or session (Coming soon!)
- Better VRAM usage optimization, fewer errors for bigger generations.
- Text2Video - Generate video clips from text prompts right from the Web UI (WIP)
- Concepts Library - Run custom embeddings others have made via textual inversion.
- Actively being developed with new features being added and planned - Stay Tuned!
- Streamlit is now the primary UI for the project moving forward.
- *Currently in active development and still missing some of the features present in the Gradio Interface.*
- Clean UI with an easy to use design, with support for widescreen displays
- *Dynamic live preview* of your generations
- Easily customizable defaults, right from the WebUI's Settings tab
- An integrated gallery to show the generations for a prompt
- *Optimized VRAM* usage for bigger generations or use on lower-end GPUs
- *Text to Video:* Generate video clips from text prompts right from the WebUI (WIP)
- Image to Text: Use [CLIP Interrogator](https://github.com/pharmapsychotic/clip-interrogator) to interrogate an image and get a prompt that you can use to generate a similar image using Stable Diffusion.
- *Concepts Library:* Run custom embeddings others have made via textual inversion.
- Textual Inversion training: Train your own embeddings on any photo you want and use them in your prompts.
- **Currently in development:** [Stable Horde](https://stablehorde.net/) integration; ImgLab, batch inputs, & mask editor from Gradio
**Prompt Weights & Negative Prompts:**
To give a token (tag recognized by the AI) a specific or increased weight (emphasis), add `:0.##` to the prompt, where `0.##` is a decimal that will specify the weight of all tokens before the colon.
Ex: `cat:0.30, dog:0.70` or `guy riding a bicycle :0.7, incoming car :0.30`
Negative prompts can be added by using `###`, after which any tokens will be seen as negative.
Ex: `cat playing with string ### yarn` will negate `yarn` from the generated image.
Negatives are a very powerful tool to get rid of contextually similar or related topics, but **be careful when adding them since the AI might see connections you can't** and end up outputting gibberish.
**Tip:** Try using the same seed with different prompt configurations or weight values to see how the AI understands them; this can lead to prompts that are better tuned and less prone to error.
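Below is a minimal sketch of a parser for the syntax just described (`:0.##` weights and a `###` negative section); `parse_prompt` is a hypothetical helper for illustration, not the app's actual code:

```python
def parse_prompt(prompt: str) -> tuple[list[tuple[str, float]], str]:
    # Everything after '###' is treated as the negative prompt
    positive, _, negative = prompt.partition("###")
    weighted = []
    for chunk in positive.split(","):
        text, colon, weight = chunk.rpartition(":")
        try:
            # A ':0.##' suffix sets the weight of the tokens before the colon
            weighted.append((text.strip(), float(weight)) if colon else (chunk.strip(), 1.0))
        except ValueError:
            weighted.append((chunk.strip(), 1.0))  # colon present but no numeric weight
    return weighted, negative.strip()

print(parse_prompt("guy riding a bicycle :0.7, incoming car :0.30"))
# ([('guy riding a bicycle', 0.7), ('incoming car', 0.3)], '')
print(parse_prompt("cat playing with string ### yarn"))
# ([('cat playing with string', 1.0)], 'yarn')
```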
Please see the [Streamlit Documentation](docs/4.streamlit-interface.md) to learn more.
## Gradio
## Gradio [Legacy]
![](images/gradio/gradio-t2i.png)
**Features:**
- Older UI design that is fully functional and feature complete.
- Older UI that is functional and feature complete.
- Has access to all upscaling models, including LDSR.
- Dynamic prompt entry automatically changes your generation settings based on `--params` in a prompt.
- Includes quick and easy ways to send generations to Image2Image or the Image Lab for upscaling.
- *Note, the Gradio interface is no longer being actively developed and is only receiving bug fixes.*
**Note: the Gradio interface is no longer being actively developed by Sygil.Dev and is only receiving bug fixes.**
Please see the [Gradio Documentation](docs/5.gradio-interface.md) to learn more.
## Image Upscalers
---
@@ -106,8 +131,8 @@ Please see the [Gradio Documentation](docs/5.gradio-interface.md) to learn more.
Lets you improve faces in pictures using the GFPGAN model. There is a checkbox in every tab to use GFPGAN at 100%, and also a separate tab that just allows you to use GFPGAN on any picture, with a slider that controls how strong the effect is.
If you want to use GFPGAN to improve generated faces, you need to install it separately.
Download [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and put it
into the `/stable-diffusion-webui/models/gfpgan` directory.
Download [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it
into the `/sygil-webui/models/gfpgan` directory.
### RealESRGAN
@@ -117,20 +142,24 @@ Lets you double the resolution of generated images. There is a checkbox in every
There is also a separate tab for using RealESRGAN on any picture.
Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).
Put them into the `stable-diffusion-webui/models/realesrgan` directory.
Put them into the `sygil-webui/models/realesrgan` directory.
### GoBig, LSDR, and GoLatent *(Currently Gradio Only)*
### LDSR
Download **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.ckpt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename last.ckpt to model.ckpt and place both under `sygil-webui/models/ldsr/`
### GoBig, and GoLatent *(Currently on the Gradio version Only)*
More powerful upscalers that use a separate Latent Diffusion model to upscale images more cleanly.
Download **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.ckpt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename last.ckpt to model.ckpt and place both under stable-diffusion-webui/models/ldsr/
Please see the [Image Enhancers Documentation](docs/5.image_enhancers.md) to learn more.
Please see the [Image Enhancers Documentation](docs/6.image_enhancers.md) to learn more.
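The download steps above all follow the same pattern, sketched below with a hypothetical `fetch` helper; it assumes `requests` is installed and that you run it from the `sygil-webui` root:

```python
import os

import requests

def fetch(url: str, dest_dir: str, file_name: str) -> str:
    """Stream a model file to dest_dir/file_name, skipping it if already present."""
    os.makedirs(dest_dir, exist_ok=True)
    path = os.path.join(dest_dir, file_name)
    if not os.path.exists(path):
        with requests.get(url, stream=True, timeout=60) as r:
            r.raise_for_status()
            with open(path, "wb") as f:
                for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                    f.write(chunk)
    return path

fetch("https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth",
      "models/gfpgan", "GFPGANv1.4.pth")
fetch("https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
      "models/realesrgan", "RealESRGAN_x4plus.pth")
# LDSR: the checkpoint must end up named model.ckpt (i.e. last.ckpt renamed)
fetch("https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1",
      "models/ldsr", "project.yaml")
fetch("https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1",
      "models/ldsr", "model.ckpt")
```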
-----
### *Original Information From The Stable Diffusion Repo*
### *Original Information From The Stable Diffusion Repo:*
# Stable Diffusion
*Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:*
[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://ommer-lab.com/research/latent-diffusion-models/)<br/>
@@ -144,7 +173,6 @@ Please see the [Image Enhancers Documentation](docs/5.image_enhancers.md) to lea
which is available on [GitHub](https://github.com/CompVis/latent-diffusion). PDF at [arXiv](https://arxiv.org/abs/2112.10752). Please also visit our [Project page](https://ommer-lab.com/research/latent-diffusion-models/).
[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion
model.
Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.
@@ -164,15 +192,14 @@ then finetuned on 512x512 images.
in its training data.
Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion).
## Comments
- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion)
and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch).
Thanks for open-sourcing!
- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories).
## BibTeX
```
@@ -184,7 +211,4 @@ Thanks for open-sourcing!
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```


@@ -0,0 +1,554 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"private_outputs": true,
"provenance": [],
"collapsed_sections": [
"5-Bx4AsEoPU-",
"xMWVQOg0G1Pj"
]
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Sygil-Dev/sygil-webui/blob/dev/Web_based_UI_for_Stable_Diffusion_colab.ipynb)"
],
"metadata": {
"id": "S5RoIM-5IPZJ"
}
},
{
"cell_type": "markdown",
"source": [
"# README"
],
"metadata": {
"id": "5-Bx4AsEoPU-"
}
},
{
"cell_type": "markdown",
"source": [
"###<center>Web-based UI for Stable Diffusion</center>\n",
"\n",
"## Created by [Sygil-Dev](https://github.com/Sygil-Dev)\n",
"\n",
"## [Visit Sygil-Dev's Discord Server](https://discord.gg/gyXNe4NySY) [![Discord Server](https://user-images.githubusercontent.com/5977640/190528254-9b5b4423-47ee-4f24-b4f9-fd13fba37518.png)](https://discord.gg/gyXNe4NySY)\n",
"\n",
"## Installation instructions for:\n",
"\n",
"- **[Windows](https://sygil-dev.github.io/sygil-webui/docs/1.windows-installation.html)** \n",
"- **[Linux](https://sygil-dev.github.io/sygil-webui/docs/2.linux-installation.html)**\n",
"\n",
"### Want to ask a question or request a feature?\n",
"\n",
"Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions](https://github.com/Sygil-Dev/sygil-webui/discussions).\n",
"\n",
"## Documentation\n",
"\n",
"[Documentation is located here](https://sygil-dev.github.io/sygil-webui/)\n",
"\n",
"## Want to contribute?\n",
"\n",
"Check the [Contribution Guide](CONTRIBUTING.md)\n",
"\n",
"[Sygil-Dev](https://github.com/Sygil-Dev) main devs:\n",
"\n",
"* ![hlky's avatar](https://avatars.githubusercontent.com/u/106811348?s=40&v=4) [hlky](https://github.com/hlky)\n",
"* ![ZeroCool940711's avatar](https://avatars.githubusercontent.com/u/5977640?s=40&v=4)[ZeroCool940711](https://github.com/ZeroCool940711)\n",
"* ![codedealer's avatar](https://avatars.githubusercontent.com/u/4258136?s=40&v=4)[codedealer](https://github.com/codedealer)\n",
"\n",
"### Project Features:\n",
"\n",
"* Two great Web UI's to choose from: Streamlit or Gradio\n",
"\n",
"* No more manually typing parameters, now all you have to do is write your prompt and adjust sliders\n",
"\n",
"* Built-in image enhancers and upscalers, including GFPGAN and realESRGAN\n",
"\n",
"* Run additional upscaling models on CPU to save VRAM\n",
"\n",
"* Textual inversion 🔥: [info](https://textual-inversion.github.io/) - requires enabling, see [here](https://github.com/hlky/sd-enable-textual-inversion), script works as usual without it enabled\n",
"\n",
"* Advanced img2img editor with Mask and crop capabilities\n",
"\n",
"* Mask painting 🖌️: Powerful tool for re-generating only specific parts of an image you want to change (currently Gradio only)\n",
"\n",
"* More diffusion samplers 🔥🔥: A great collection of samplers to use, including:\n",
" \n",
" - `k_euler` (Default)\n",
" - `k_lms`\n",
" - `k_euler_a`\n",
" - `k_dpm_2`\n",
" - `k_dpm_2_a`\n",
" - `k_heun`\n",
" - `PLMS`\n",
" - `DDIM`\n",
"\n",
"* Loopback ➿: Automatically feed the last generated sample back into img2img\n",
"\n",
"* Prompt Weighting 🏋️: Adjust the strength of different terms in your prompt\n",
"\n",
"* Selectable GPU usage with `--gpu <id>`\n",
"\n",
"* Memory Monitoring 🔥: Shows VRAM usage and generation time after outputting\n",
"\n",
"* Word Seeds 🔥: Use words instead of seed numbers\n",
"\n",
"* CFG: Classifier free guidance scale, a feature for fine-tuning your output\n",
"\n",
"* Automatic Launcher: Activate conda and run Stable Diffusion with a single command\n",
"\n",
"* Lighter on VRAM: 512x512 Text2Image & Image2Image tested working on 4GB\n",
"\n",
"* Prompt validation: If your prompt is too long, you will get a warning in the text output field\n",
"\n",
"* Copy-paste generation parameters: A text output provides generation parameters in an easy to copy-paste form for easy sharing.\n",
"\n",
"* Correct seeds for batches: If you use a seed of 1000 to generate two batches of two images each, four generated images will have seeds: `1000, 1001, 1002, 1003`.\n",
"\n",
"* Prompt matrix: Separate multiple prompts using the `|` character, and the system will produce an image for every combination of them.\n",
"\n",
"* Loopback for Image2Image: A checkbox for img2img allowing to automatically feed output image as input for the next batch. Equivalent to saving output image, and replacing input image with it.\n",
"\n",
"# Stable Diffusion Web UI\n",
"\n",
"A fully-integrated and easy way to work with Stable Diffusion right from a browser window.\n",
"\n",
"## Streamlit\n",
"\n",
"![](https://github.com/aedhcarrick/sygil-webui/blob/patch-2/images/streamlit/streamlit-t2i.png?raw=1)\n",
"\n",
"**Features:**\n",
"\n",
"- Clean UI with an easy to use design, with support for widescreen displays.\n",
"- Dynamic live preview of your generations\n",
"- Easily customizable presets right from the WebUI (Coming Soon!)\n",
"- An integrated gallery to show the generations for a prompt or session (Coming soon!)\n",
"- Better optimization VRAM usage optimization, less errors for bigger generations.\n",
"- Text2Video - Generate video clips from text prompts right from the WEb UI (WIP)\n",
"- Concepts Library - Run custom embeddings others have made via textual inversion.\n",
"- Actively being developed with new features being added and planned - Stay Tuned!\n",
"- Streamlit is now the new primary UI for the project moving forward.\n",
"- *Currently in active development and still missing some of the features present in the Gradio Interface.*\n",
"\n",
"Please see the [Streamlit Documentation](docs/4.streamlit-interface.md) to learn more.\n",
"\n",
"## Gradio\n",
"\n",
"![](https://github.com/aedhcarrick/sygil-webui/blob/patch-2/images/gradio/gradio-t2i.png?raw=1)\n",
"\n",
"**Features:**\n",
"\n",
"- Older UI design that is fully functional and feature complete.\n",
"- Has access to all upscaling models, including LSDR.\n",
"- Dynamic prompt entry automatically changes your generation settings based on `--params` in a prompt.\n",
"- Includes quick and easy ways to send generations to Image2Image or the Image Lab for upscaling.\n",
"- *Note, the Gradio interface is no longer being actively developed and is only receiving bug fixes.*\n",
"\n",
"Please see the [Gradio Documentation](docs/5.gradio-interface.md) to learn more.\n",
"\n",
"## Image Upscalers\n",
"\n",
"---\n",
"\n",
"### GFPGAN\n",
"\n",
"![](https://github.com/aedhcarrick/sygil-webui/blob/patch-2/images/GFPGAN.png?raw=1)\n",
"\n",
"Lets you improve faces in pictures using the GFPGAN model. There is a checkbox in every tab to use GFPGAN at 100%, and also a separate tab that just allows you to use GFPGAN on any picture, with a slider that controls how strong the effect is.\n",
"\n",
"If you want to use GFPGAN to improve generated faces, you need to install it separately.\n",
"Download [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it\n",
"into the `/sygil-webui/models/gfpgan` directory. \n",
"\n",
"### RealESRGAN\n",
"\n",
"![](https://github.com/aedhcarrick/sygil-webui/blob/patch-2/images/RealESRGAN.png?raw=1)\n",
"\n",
"Lets you double the resolution of generated images. There is a checkbox in every tab to use RealESRGAN, and you can choose between the regular upscaler and the anime version.\n",
"There is also a separate tab for using RealESRGAN on any picture.\n",
"\n",
"Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).\n",
"Put them into the `sygil-webui/models/realesrgan` directory. \n",
"\n",
"\n",
"\n",
"### LSDR\n",
"\n",
"Download **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.cpkt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename last.ckpt to model.ckpt and place both under `sygil-webui/models/ldsr/`\n",
"\n",
"### GoBig, and GoLatent *(Currently on the Gradio version Only)*\n",
"\n",
"More powerful upscalers that uses a seperate Latent Diffusion model to more cleanly upscale images.\n",
"\n",
"\n",
"\n",
"Please see the [Image Enhancers Documentation](docs/6.image_enhancers.md) to learn more.\n",
"\n",
"-----\n",
"\n",
"### *Original Information From The Stable Diffusion Repo*\n",
"\n",
"# Stable Diffusion\n",
"\n",
"*Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:*\n",
"\n",
"[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://ommer-lab.com/research/latent-diffusion-models/)<br/>\n",
"[Robin Rombach](https://github.com/rromb)\\*,\n",
"[Andreas Blattmann](https://github.com/ablattmann)\\*,\n",
"[Dominik Lorenz](https://github.com/qp-qp)\\,\n",
"[Patrick Esser](https://github.com/pesser),\n",
"[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)<br/>\n",
"\n",
"**CVPR '22 Oral**\n",
"\n",
"which is available on [GitHub](https://github.com/CompVis/latent-diffusion). PDF at [arXiv](https://arxiv.org/abs/2112.10752). Please also visit our [Project page](https://ommer-lab.com/research/latent-diffusion-models/).\n",
"\n",
"[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion\n",
"model.\n",
"Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. \n",
"Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487), \n",
"this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.\n",
"With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.\n",
"See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).\n",
"\n",
"## Stable Diffusion v1\n",
"\n",
"Stable Diffusion v1 refers to a specific configuration of the model\n",
"architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet\n",
"and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and \n",
"then finetuned on 512x512 images.\n",
"\n",
"*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present\n",
"in its training data. \n",
"Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion).\n",
"\n",
"## Comments\n",
"\n",
"- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion)\n",
" and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch). \n",
" Thanks for open-sourcing!\n",
"\n",
"- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories). \n",
"\n",
"## BibTeX\n",
"\n",
"```\n",
"@misc{rombach2021highresolution,\n",
" title={High-Resolution Image Synthesis with Latent Diffusion Models}, \n",
" author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},\n",
" year={2021},\n",
" eprint={2112.10752},\n",
" archivePrefix={arXiv},\n",
" primaryClass={cs.CV}\n",
"}\n",
"\n",
"```"
],
"metadata": {
"id": "z4kQYMPQn4d-"
}
},
{
"cell_type": "markdown",
"source": [
"# Config options for Colab instance\n",
"> Before running, make sure GPU backend is enabled. (Unless you plan on generating with Stable Horde)\n",
">> Runtime -> Change runtime type -> Hardware Accelerator -> GPU (Make sure to save)"
],
"metadata": {
"id": "iegma7yteERV"
}
},
{
"cell_type": "code",
"source": [
"#@markdown WebUI repo (and branch)\n",
"repo_name = \"Sygil-Dev/sygil-webui\" #@param {type:\"string\"}\n",
"repo_branch = \"dev\" #@param {type:\"string\"}\n",
"\n",
"#@markdown Mount Google Drive\n",
"mount_google_drive = True #@param {type:\"boolean\"}\n",
"save_outputs_to_drive = True #@param {type:\"boolean\"}\n",
"#@markdown Folder in Google Drive to search for custom models\n",
"MODEL_DIR = \"\" #@param {type:\"string\"}\n",
"\n",
"#@markdown Enter auth token from Huggingface.co\n",
"#@markdown >(required for downloading stable diffusion model.)\n",
"HF_TOKEN = \"\" #@param {type:\"string\"}\n",
"\n",
"#@markdown Select which models to prefetch\n",
"STABLE_DIFFUSION = True #@param {type:\"boolean\"}\n",
"WAIFU_DIFFUSION = False #@param {type:\"boolean\"}\n",
"TRINART_SD = False #@param {type:\"boolean\"}\n",
"SD_WD_LD_TRINART_MERGED = False #@param {type:\"boolean\"}\n",
"GFPGAN = True #@param {type:\"boolean\"}\n",
"REALESRGAN = True #@param {type:\"boolean\"}\n",
"LDSR = True #@param {type:\"boolean\"}\n",
"BLIP_MODEL = False #@param {type:\"boolean\"}\n",
"\n"
],
"metadata": {
"id": "OXn96M9deVtF"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Setup\n",
"\n",
">Runtime will crash when installing conda. This is normal as we are forcing a restart of the runtime from code.\n",
"\n",
">Just hit \"Run All\" again. 😑"
],
"metadata": {
"id": "IZjJSr-WPNxB"
}
},
{
"cell_type": "code",
"metadata": {
"id": "eq0-E5mjSpmP"
},
"source": [
"#@title Make sure we have access to GPU backend\n",
"!nvidia-smi -L"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Install miniConda (mamba)\n",
"!pip install condacolab\n",
"import condacolab\n",
"condacolab.install_from_url(\"https://github.com/conda-forge/miniforge/releases/download/4.14.0-0/Mambaforge-4.14.0-0-Linux-x86_64.sh\")\n",
"\n",
"import condacolab\n",
"condacolab.check()\n",
"# The runtime will crash here!!! Don't panic! We planned for this remember?"
],
"metadata": {
"id": "cDu33xkdJ5mD"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Clone webUI repo and download font\n",
"import os\n",
"REPO_URL = os.path.join('https://github.com', repo_name)\n",
"PATH_TO_REPO = os.path.join('/content', repo_name.split('/')[1])\n",
"!git clone {REPO_URL}\n",
"%cd {PATH_TO_REPO}\n",
"!git checkout {repo_branch}\n",
"!git pull\n",
"!wget -O arial.ttf https://github.com/matomo-org/travis-scripts/blob/master/fonts/Arial.ttf?raw=true"
],
"metadata": {
"id": "pZHGf03Vp305"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Install dependencies\n",
"!mamba install cudatoolkit=11.3 git numpy=1.22.3 pip=20.3 python=3.8.5 pytorch=1.11.0 scikit-image=0.19.2 torchvision=0.12.0 -y\n",
"!python --version\n",
"!pip install -r requirements.txt"
],
"metadata": {
"id": "dmN2igp5Yk3z"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Install localtunnel to openGoogle's ports\n",
"!npm install localtunnel"
],
"metadata": {
"id": "Nxaxfgo_F8Am"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Mount Google Drive (if selected)\n",
"if mount_google_drive:\n",
" # Mount google drive to store outputs.\n",
" from google.colab import drive\n",
" drive.mount('/content/drive/', force_remount=True)\n",
"\n",
"if save_outputs_to_drive:\n",
" # Make symlink to redirect downloads\n",
" OUTPUT_PATH = os.path.join('/content/drive/MyDrive', repo_name.split('/')[1], 'outputs')\n",
" os.makedirs(OUTPUT_PATH, exist_ok=True)\n",
" os.symlink(OUTPUT_PATH, os.path.join(PATH_TO_REPO, 'outputs'), target_is_directory=True)\n"
],
"metadata": {
"id": "pcSWo9Zkzbsf"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Pre-fetch models\n",
"%cd {PATH_TO_REPO}\n",
"# make list of models we want to download\n",
"model_list = {\n",
" 'stable_diffusion': f'{STABLE_DIFFUSION}',\n",
" 'waifu_diffusion': f'{WAIFU_DIFFUSION}',\n",
" 'trinart_stable_diffusion': f'{TRINART_SD}',\n",
" 'sd_wd_ld_trinart_merged': f'{SD_WD_LD_TRINART_MERGED}',\n",
" 'gfpgan': f'{GFPGAN}',\n",
" 'realesrgan': f'{REALESRGAN}',\n",
" 'ldsr': f'{LDSR}',\n",
" 'blip_model': f'{BLIP_MODEL}'}\n",
"download_list = {k for (k,v) in model_list.items() if v == 'True'}\n",
"\n",
"# get model info (file name, download link, save location)\n",
"import yaml\n",
"from pprint import pprint\n",
"with open('configs/webui/webui_streamlit.yaml') as f:\n",
" dataMap = yaml.safe_load(f)\n",
"models = dataMap['model_manager']['models']\n",
"\n",
"# copy script from model manager\n",
"import requests, time\n",
"from requests.auth import HTTPBasicAuth\n",
"\n",
"def download_file(file_name, file_path, file_url):\n",
" os.makedirs(file_path, exist_ok=True)\n",
" if os.path.exists(os.path.join(MODEL_DIR , file_name)):\n",
" print( file_name + \"found in Google Drive\")\n",
" print( \"Creating symlink...\")\n",
" os.symlink(os.path.join(MODEL_DIR , file_name), os.path.join(file_path, file_name))\n",
" elif not os.path.exists(os.path.join(file_path , file_name)):\n",
" print( \"Downloading \" + file_name + \"...\", end=\"\" )\n",
" token = None\n",
" if \"huggingface.co\" in file_url:\n",
" token = HTTPBasicAuth('token', HF_TOKEN)\n",
" try:\n",
" with requests.get(file_url, auth = token, stream=True) as r:\n",
" starttime = time.time()\n",
" r.raise_for_status()\n",
" with open(os.path.join(file_path, file_name), 'wb') as f:\n",
" for chunk in r.iter_content(chunk_size=8192):\n",
" f.write(chunk)\n",
" if ((time.time() - starttime) % 60.0) > 2 :\n",
" starttime = time.time()\n",
" print( \".\", end=\"\" )\n",
" print( \"done\" )\n",
" print( \" \" + file_name + \" downloaded to \\'\" + file_path + \"\\'\" )\n",
" except:\n",
" print( \"Failed to download \" + file_name + \".\" )\n",
" else:\n",
" print( file_name + \" already exists.\" )\n",
"\n",
"# download models in list\n",
"for model in download_list:\n",
" model_name = models[model]['model_name']\n",
" file_info = models[model]['files']\n",
" for file in file_info:\n",
" file_name = file_info[file]['file_name']\n",
" file_url = file_info[file]['download_link']\n",
" if 'save_location' in file_info[file]:\n",
" file_path = file_info[file]['save_location']\n",
" else: \n",
" file_path = models[model]['save_location']\n",
" download_file(file_name, file_path, file_url)\n",
"\n",
"# add custom models not in list\n",
"CUSTOM_MODEL_DIR = os.path.join(PATH_TO_REPO, 'models/custom')\n",
"if MODEL_DIR != \"\":\n",
" MODEL_DIR = os.path.join('/content/drive/MyDrive', MODEL_DIR)\n",
" if os.path.exists(MODEL_DIR):\n",
" custom_models = os.listdir(MODEL_DIR)\n",
" custom_models = [m for m in custom_models if os.path.isfile(MODEL_DIR + '/' + m)]\n",
" os.makedirs(CUSTOM_MODEL_DIR, exist_ok=True)\n",
" print( \"Custom model(s) found: \" )\n",
" for m in custom_models:\n",
" print( \" \" + m )\n",
" os.symlink(os.path.join(MODEL_DIR , m), os.path.join(CUSTOM_MODEL_DIR, m))\n",
"\n"
],
"metadata": {
"id": "vMdmh81J70yA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Launch the web ui server\n",
"### (optional) JS to prevent idle timeout:\n",
"Press 'F12' OR ('CTRL' + 'SHIFT' + 'I') OR right click on this website -> inspect. Then click on the console tab and paste in the following code.\n",
"```js,\n",
"function ClickConnect(){\n",
"console.log(\"Working\");\n",
"document.querySelector(\"colab-toolbar-button#connect\").click()\n",
"}\n",
"setInterval(ClickConnect,60000)\n",
"```"
],
"metadata": {
"id": "pjIjiCuJysJI"
}
},
{
"cell_type": "code",
"source": [
"#@title Press play on the music player to keep the tab alive (Uses only 13MB of data)\n",
"%%html\n",
"<b>Press play on the music player to keep the tab alive, then start your generation below (Uses only 13MB of data)</b><br/>\n",
"<audio src=\"https://henk.tech/colabkobold/silence.m4a\" controls>"
],
"metadata": {
"id": "-WknaU7uu_q6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Run localtunnel and start Streamlit server. ('Ctrl' + 'left click') on link in the 'link.txt' file. (/content/link.txt)\n",
"!npx localtunnel --port 8501 &>/content/link.txt &\n",
"!streamlit run scripts/webui_streamlit.py --theme.base dark --server.headless true 2>&1 | tee -a /content/log.txt"
],
"metadata": {
"id": "5whXm2nfSZ39"
},
"execution_count": null,
"outputs": []
}
]
}


@@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or


@@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -12,21 +12,23 @@
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# UI defaults configuration file. It is automatically loaded if located at configs/webui/webui_streamlit.yaml.
# Any changes made here will be available automatically on the web app without having to stop it.
# You may add overrides in a file named "userconfig_streamlit.yaml" in this folder, which can contain any subset
# of the properties below.
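# Example of an illustrative userconfig_streamlit.yaml override; only the keys
# being changed need to appear (values here are placeholders):
#   general:
#     default_theme: light
#     outdir: outputs/custom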
general:
version: 1.24.6
streamlit_telemetry: False
default_theme: dark
huggingface_token: ""
huggingface_token: ''
stable_horde_api: '0000000000'
gpu: 0
outdir: outputs
default_model: "Stable Diffusion v1.4"
default_model: "Stable Diffusion v1.5"
default_model_config: "configs/stable-diffusion/v1-inference.yaml"
default_model_path: "models/ldm/stable-diffusion-v1/model.ckpt"
default_model_path: "models/ldm/stable-diffusion-v1/Stable Diffusion v1.5.ckpt"
use_sd_concepts_library: True
sd_concepts_library_folder: "models/custom/sd-concepts-library"
GFPGAN_dir: "./models/gfpgan"
@@ -38,17 +40,19 @@ general:
upscaling_method: "RealESRGAN"
outdir_txt2img: outputs/txt2img
outdir_img2img: outputs/img2img
outdir_img2txt: outputs/img2txt
gfpgan_cpu: False
esrgan_cpu: False
extra_models_cpu: False
extra_models_gpu: False
gfpgan_gpu: 0
esrgan_gpu: 0
keep_all_models_loaded: False
save_metadata: True
save_format: "png"
skip_grid: False
skip_save: False
grid_format: "jpg:95"
grid_quality: 95
n_rows: -1
no_verify_input: False
no_half: False
@@ -62,6 +66,13 @@ general:
update_preview: True
update_preview_frequency: 10
admin:
hide_server_setting: False
hide_browser_setting: False
debug:
enable_hydralit: False
txt2img:
prompt:
width:
@@ -69,32 +80,31 @@ txt2img:
min_value: 64
max_value: 2048
step: 64
height:
value: 512
min_value: 64
max_value: 2048
step: 64
cfg_scale:
value: 7.5
min_value: 1.0
max_value: 30.0
step: 0.5
seed: ""
batch_count:
value: 1
batch_size:
value: 1
sampling_steps:
value: 30
min_value: 10
max_value: 250
step: 10
LDSR_config:
sampling_steps: 50
preDownScale: 1
@@ -115,56 +125,55 @@ txt2img:
use_LDSR: False
RealESRGAN_model: "RealESRGAN_x4plus"
use_upscaling: False
variant_amount:
value: 0.0
min_value: 0.0
max_value: 1.0
step: 0.01
variant_seed: ""
write_info_files: True
txt2vid:
default_model: "CompVis/stable-diffusion-v1-4"
custom_models_list: ["CompVis/stable-diffusion-v1-4"]
default_model: "runwayml/stable-diffusion-v1-5"
custom_models_list: ["runwayml/stable-diffusion-v1-5", "CompVis/stable-diffusion-v1-4", "hakurei/waifu-diffusion"]
prompt:
width:
value: 512
min_value: 64
max_value: 2048
step: 64
height:
value: 512
min_value: 64
max_value: 2048
step: 64
cfg_scale:
value: 7.5
min_value: 1.0
max_value: 30.0
step: 0.5
batch_count:
value: 1
batch_size:
value: 1
sampling_steps:
value: 30
min_value: 10
max_value: 250
step: 10
num_inference_steps:
value: 200
min_value: 10
max_value: 500
step: 10
seed: ""
default_sampler: "k_euler"
scheduler_name: "klms"
@@ -175,9 +184,11 @@ txt2vid:
normalize_prompt_weights: True
save_individual_images: True
save_video: True
save_video_on_stop: False
group_by_prompt: True
write_info_files: True
do_loop: False
use_lerp_for_text: False
save_as_jpg: False
use_GFPGAN: False
use_RealESRGAN: False
@@ -188,43 +199,44 @@ txt2vid:
min_value: 0.0
max_value: 1.0
step: 0.01
variant_seed: ""
beta_start:
value: 0.00085
min_value: 0.0001
max_value: 0.0300
step: 0.0001
min_value: 0.00010
max_value: 0.03000
step: 0.00010
format: "%.5f"
beta_end:
value: 0.012
min_value: 0.0001
max_value: 0.0300
step: 0.0001
value: 0.01200
min_value: 0.00010
max_value: 0.03000
step: 0.00010
format: "%.5f"
beta_scheduler_type: "scaled_linear"
max_frames: 100
max_duration_in_seconds: 30
LDSR_config:
sampling_steps: 50
preDownScale: 1
postDownScale: 1
downsample_method: "Lanczos"
img2img:
prompt:
sampler_name: "k_euler"
denoising_strength:
value: 0.75
min_value: 0.0
max_value: 1.0
step: 0.01
# 0: Keep masked area
# 1: Regenerate only masked area
mask_mode: 0
mask_mode: 1
noise_mode: "Matched Noise"
mask_restore: False
# 0: Just resize
# 1: Crop and resize
@@ -238,49 +250,47 @@ img2img:
min_value: 64
max_value: 2048
step: 64
height:
value: 512
min_value: 64
max_value: 2048
step: 64
cfg_scale:
value: 7.5
min_value: 1.0
max_value: 30.0
step: 0.5
batch_count:
value: 1
batch_size:
value: 1
sampling_steps:
value: 30
min_value: 10
max_value: 250
step: 10
num_inference_steps:
value: 200
min_value: 10
max_value: 500
step: 10
find_noise_steps:
value: 100
min_value: 0
max_value: 500
step: 10
min_value: 100
step: 100
LDSR_config:
sampling_steps: 50
preDownScale: 1
postDownScale: 1
downsample_method: "Lanczos"
loopback: True
random_seed_loopback: True
separate_prompts: False
@@ -298,36 +308,36 @@ img2img:
variant_amount: 0.0
variant_seed: ""
write_info_files: True
img2txt:
batch_size: 420
batch_size: 2000
blip_image_eval_size: 512
keep_all_models_loaded: False
concepts_library:
concepts_per_page: 12
gfpgan:
strength: 100
textual_inversion:
pretrained_model_name_or_path: "models/diffusers/stable-diffusion-v1-4"
pretrained_model_name_or_path: "models/diffusers/stable-diffusion-v1-5"
tokenizer_name: "models/clip-vit-large-patch14"
daisi_app:
running_on_daisi_io: False
model_manager:
models:
stable_diffusion:
model_name: "Stable Diffusion v1.4"
model_name: "Stable Diffusion v1.5"
save_location: "./models/ldm/stable-diffusion-v1"
files:
model_ckpt:
file_name: "model.ckpt"
download_link: "https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media"
file_name: "Stable Diffusion v1.5.ckpt"
download_link: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt"
gfpgan:
model_name: "GFPGAN"
save_location: "./models/gfpgan"
@@ -343,8 +353,8 @@ model_manager:
file_name: "parsing_parsenet.pth"
save_location: "./gfpgan/weights"
download_link: "https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth"
realesrgan:
model_name: "RealESRGAN"
save_location: "./models/realesrgan"
@@ -355,17 +365,17 @@ model_manager:
x4plus_anime_6b:
file_name: "RealESRGAN_x4plus_anime_6B.pth"
download_link: "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth"
waifu_diffusion:
model_name: "Waifu Diffusion v1.2"
model_name: "Waifu Diffusion v1.3"
save_location: "./models/custom"
files:
waifu_diffusion:
file_name: "waifu-diffusion.ckpt"
download_link: "https://huggingface.co/crumb/pruned-waifu-diffusion/resolve/main/model-pruned.ckpt"
file_name: "Waifu-Diffusion-v1-3 Full ema.ckpt"
download_link: "https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-full.ckpt"
trinart_stable_diffusion:
model_name: "TrinArt Stable Diffusion v2"
save_location: "./models/custom"
@@ -373,15 +383,23 @@ model_manager:
trinart:
file_name: "trinart.ckpt"
download_link: "https://huggingface.co/naclbit/trinart_stable_diffusion_v2/resolve/main/trinart2_step95000.ckpt"
sd_wd_ld_trinart_merged:
model_name: "SD1.5-WD1.3-LD-Trinart-Merged"
save_location: "./models/custom"
files:
sd_wd_ld_trinart_merged:
file_name: "SD1.5-WD1.3-LD-Trinart-Merged.ckpt"
download_link: "https://huggingface.co/ZeroCool94/sd1.5-wd1.3-ld-trinart-merged/resolve/main/SD1.5-WD1.3-LD-Trinart-Merged.ckpt"
stable_diffusion_concept_library:
model_name: "Stable Diffusion Concept Library"
save_location: "./models/custom/sd-concepts-library/"
files:
concept_library:
file_name: ""
download_link: "https://github.com/sd-webui/sd-concepts-library"
download_link: "https://github.com/Sygil-Dev/sd-concepts-library"
blip_model:
model_name: "Blip Model"
save_location: "./models/blip"
@@ -389,7 +407,7 @@ model_manager:
blip:
file_name: "model__base_caption.pth"
download_link: "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base_caption.pth"
ldsr:
model_name: "Latent Diffusion Super Resolution (LDSR)"
save_location: "./models/ldsr"
@@ -397,8 +415,7 @@ model_manager:
project_yaml:
file_name: "project.yaml"
download_link: "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1"
ldsr_model:
file_name: "model.ckpt"
download_link: "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1"


@@ -5,14 +5,14 @@ print (os.getcwd)
try:
with open("environment.yaml") as file_handle:
environment_data = yaml.load(file_handle, Loader=yaml.FullLoader)
environment_data = yaml.safe_load(file_handle)  # safe_load takes no Loader argument
except FileNotFoundError:
try:
with open(os.path.join("..", "environment.yaml")) as file_handle:
environment_data = yaml.load(file_handle, Loader=yaml.FullLoader)
environment_data = yaml.safe_load(file_handle)
except:
pass
try:
for dependency in environment_data["dependencies"]:
package_name, package_version = dependency.split("=")
@@ -21,6 +21,6 @@ except:
pass
try:
subprocess.run(['python', '-m', 'streamlit', "run" ,os.path.join("..","scripts/webui_streamlit.py"), "--theme.base dark"], stdout=subprocess.DEVNULL)
except FileExistsError:
subprocess.run(['python', '-m', 'streamlit', "run" ,"scripts/webui_streamlit.py", "--theme.base dark"], stdout=subprocess.DEVNULL)

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

160
data/img2txt/subreddits.txt Normal file

@@ -0,0 +1,160 @@
/r/ImaginaryAetherpunk
/r/ImaginaryAgriculture
/r/ImaginaryAirships
/r/ImaginaryAliens
/r/ImaginaryAngels
/r/ImaginaryAnimals
/r/ImaginaryArchers
/r/ImaginaryArchitecture
/r/ImaginaryArmor
/r/ImaginaryArtisans
/r/ImaginaryAssassins
/r/ImaginaryAstronauts
/r/ImaginaryAsylums
/r/ImaginaryAutumnscapes
/r/ImaginaryAviation
/r/ImaginaryAzeroth
/r/ImaginaryBattlefields
/r/ImaginaryBeasts
/r/ImaginaryBehemoths
/r/ImaginaryBodyscapes
/r/ImaginaryBooks
/r/ImaginaryCanyons
/r/ImaginaryCarnage
/r/ImaginaryCastles
/r/ImaginaryCaves
/r/ImaginaryCentaurs
/r/ImaginaryCharacters
/r/ImaginaryCityscapes
/r/ImaginaryClerics
/r/ImaginaryCowboys
/r/ImaginaryCrawlers
/r/ImaginaryCultists
/r/ImaginaryCybernetics
/r/ImaginaryCyberpunk
/r/ImaginaryDarkSouls
/r/ImaginaryDemons
/r/ImaginaryDerelicts
/r/ImaginaryDeserts
/r/ImaginaryDieselpunk
/r/ImaginaryDinosaurs
/r/ImaginaryDragons
/r/ImaginaryDruids
/r/ImaginaryDwarves
/r/ImaginaryDwellings
/r/ImaginaryElementals
/r/ImaginaryElves
/r/ImaginaryExplosions
/r/ImaginaryFactories
/r/ImaginaryFaeries
/r/ImaginaryFallout
/r/ImaginaryFamilies
/r/ImaginaryFashion
/r/ImaginaryFood
/r/ImaginaryForests
/r/ImaginaryFutureWar
/r/ImaginaryFuturism
/r/ImaginaryGardens
/r/ImaginaryGatherings
/r/ImaginaryGiants
/r/ImaginaryGlaciers
/r/ImaginaryGnomes
/r/ImaginaryGoblins
/r/ImaginaryHellscapes
/r/ImaginaryHistory
/r/ImaginaryHorrors
/r/ImaginaryHumans
/r/ImaginaryHybrids
/r/ImaginaryIcons
/r/ImaginaryImmortals
/r/ImaginaryInteriors
/r/ImaginaryIslands
/r/ImaginaryJedi
/r/ImaginaryKanto
/r/ImaginaryKnights
/r/ImaginaryLakes
/r/ImaginaryLandscapes
/r/ImaginaryLesbians
/r/ImaginaryLeviathans
/r/ImaginaryLovers
/r/ImaginaryMarvel
/r/ImaginaryMeIRL
/r/ImaginaryMechs
/r/ImaginaryMen
/r/ImaginaryMerchants
/r/ImaginaryMerfolk
/r/ImaginaryMiddleEarth
/r/ImaginaryMindscapes
/r/ImaginaryMonsterBoys
/r/ImaginaryMonsterGirls
/r/ImaginaryMonsters
/r/ImaginaryMonuments
/r/ImaginaryMountains
/r/ImaginaryMovies
/r/ImaginaryMythology
/r/ImaginaryNatives
/r/ImaginaryNecronomicon
/r/ImaginaryNightscapes
/r/ImaginaryNinjas
/r/ImaginaryNobles
/r/ImaginaryNomads
/r/ImaginaryOrcs
/r/ImaginaryPathways
/r/ImaginaryPirates
/r/ImaginaryPolice
/r/ImaginaryPolitics
/r/ImaginaryPortals
/r/ImaginaryPrisons
/r/ImaginaryPropaganda
/r/ImaginaryRivers
/r/ImaginaryRobotics
/r/ImaginaryRuins
/r/ImaginaryScholars
/r/ImaginaryScience
/r/ImaginarySeascapes
/r/ImaginarySkyscapes
/r/ImaginarySlavery
/r/ImaginarySoldiers
/r/ImaginarySpirits
/r/ImaginarySports
/r/ImaginarySpringscapes
/r/ImaginaryStarscapes
/r/ImaginaryStarships
/r/ImaginaryStatues
/r/ImaginarySteampunk
/r/ImaginarySummerscapes
/r/ImaginarySwamps
/r/ImaginaryTamriel
/r/ImaginaryTaverns
/r/ImaginaryTechnology
/r/ImaginaryTemples
/r/ImaginaryTowers
/r/ImaginaryTrees
/r/ImaginaryTrolls
/r/ImaginaryUndead
/r/ImaginaryUnicorns
/r/ImaginaryVampires
/r/ImaginaryVehicles
/r/ImaginaryVessels
/r/ImaginaryVikings
/r/ImaginaryVillages
/r/ImaginaryVolcanoes
/r/ImaginaryWTF
/r/ImaginaryWalls
/r/ImaginaryWarhammer
/r/ImaginaryWarriors
/r/ImaginaryWarships
/r/ImaginaryWastelands
/r/ImaginaryWaterfalls
/r/ImaginaryWaterscapes
/r/ImaginaryWeaponry
/r/ImaginaryWeather
/r/ImaginaryWerewolves
/r/ImaginaryWesteros
/r/ImaginaryWildlands
/r/ImaginaryWinterscapes
/r/ImaginaryWitcher
/r/ImaginaryWitches
/r/ImaginaryWizards
/r/ImaginaryWorldEaters
/r/ImaginaryWorlds

1936
data/img2txt/tags.txt Normal file

File diff suppressed because it is too large


@@ -0,0 +1,63 @@
Fine Art
Diagrammatic
Geometric
Architectural
Analytic
3D
Anamorphic
Pencil
Color Pencil
Charcoal
Graphite
Chalk
Pen
Ink
Crayon
Pastel
Sand
Beach Art
Rangoli
Mehndi
Flower
Food Art
Tattoo
Digital
Pixel
Embroidery
Line
Pointillism
Single Color
Stippling
Contour
Hatching
Scumbling
Scribble
Geometric Portait
Triangulation
Caricature
Photorealism
Photo realistic
Doodling
Wordtoons
Cartoon
Anime
Manga
Graffiti
Typography
Calligraphy
Mosaic
Figurative
Anatomy
Life
Still life
Portrait
Landscape
Perspective
Funny
Surreal
Wall Mural
Street
Realistic
Photo Realistic
Hyper Realistic
Doodle

26
data/tags/config.json Normal file

@@ -0,0 +1,26 @@
{
"tagFile": "danbooru.csv",
"maxResults": 5,
"replaceUnderscores": true,
"escapeParentheses": true,
"colors": {
"danbooru": {
"0": ["lightblue", "dodgerblue"],
"1": ["indianred", "firebrick"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["orange", "darkorange"]
},
"e621": {
"-1": ["red", "maroon"],
"0": ["lightblue", "dodgerblue"],
"1": ["gold", "goldenrod"],
"3": ["violet", "darkorchid"],
"4": ["lightgreen", "darkgreen"],
"5": ["tomato", "darksalmon"],
"6": ["red", "maroon"],
"7": ["whitesmoke", "black"],
"8": ["seagreen", "darkseagreen"]
}
}
}
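This config drives tag autocompletion from the CSV files under `data/tags/`. A minimal sketch of how a UI might apply it, assuming (a guess about the CSV layout, not confirmed by the source) that each `danbooru.csv` row starts with the tag text:

```python
import csv
import json

with open("data/tags/config.json") as f:
    cfg = json.load(f)

def complete(prefix: str) -> list[str]:
    """Return up to maxResults tags starting with prefix, post-processed per config."""
    out = []
    with open(f"data/tags/{cfg['tagFile']}", newline="") as tags:
        for row in csv.reader(tags):
            tag = row[0]
            if tag.startswith(prefix):
                if cfg["replaceUnderscores"]:
                    tag = tag.replace("_", " ")
                if cfg["escapeParentheses"]:
                    tag = tag.replace("(", r"\(").replace(")", r"\)")
                out.append(tag)
                if len(out) >= cfg["maxResults"]:
                    break
    return out
```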

109721
data/tags/danbooru.csv Normal file

File diff suppressed because it is too large

66094
data/tags/e621.csv Normal file

File diff suppressed because it is too large

36704
data/tags/key_phrases.json Normal file

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

467
db.json Normal file
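db.json is a machine-readable catalog of every model the UI knows how to fetch: each entry lists the files it needs under `config.files` and their sources under `config.download`, with `available` tracking install state. A minimal sketch of how a manager script might refresh those flags, given the layout shown below (md5 verification omitted; not necessarily how the repo's own loader works):

```python
import json
import os

with open("db.json") as f:
    db = json.load(f)

for name, entry in db.items():
    specs = entry.get("config", {}).get("files", [])
    # An entry counts as available once every file it lists exists on disk
    entry["available"] = bool(specs) and all(os.path.exists(spec["path"]) for spec in specs)
    print(f"{name:30} {'ready' if entry['available'] else 'missing files'}")
```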

@@ -0,0 +1,467 @@
{
"stable_diffusion": {
"name": "stable_diffusion",
"type": "ckpt",
"description": "Generalist AI image generating model. The baseline for all finetuned models.",
"version": "1.5",
"style": "generalist",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/ldm/stable-diffusion-v1/model_1_5.ckpt"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "model_1_5.ckpt",
"file_path": "models/ldm/stable-diffusion-v1",
"file_url": "https://{username}:{password}@huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt",
"hf_auth": true
}
]
},
"available": false
},
"stable_diffusion_1.4": {
"name": "stable_diffusion",
"type": "ckpt",
"description": "Generalist AI image generating model. The baseline for all finetuned models.",
"version": "1.4",
"style": "generalist",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/ldm/stable-diffusion-v1/model.ckpt",
"md5sum": "c01059060130b8242849d86e97212c84"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "model.ckpt",
"file_path": "models/ldm/stable-diffusion-v1",
"file_url": "https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media"
}
],
"alt_download": [
{
"file_name": "model.ckpt",
"file_path": "models/ldm/stable-diffusion-v1",
"file_url": "https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt",
"hf_auth": true
}
]
},
"available": false
},
"waifu_diffusion": {
"name": "waifu_diffusion",
"type": "ckpt",
"description": "Anime styled generations.",
"version": "1.3",
"style": "anime",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/waifu-diffusion.ckpt",
"md5sum": "a2aa170e3f513b32a3fd8841656e0123"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "waifu-diffusion.ckpt",
"file_path": "models/custom",
"file_url": "https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-full.ckpt"
}
]
},
"available": false
},
"Furry Epoch": {
"name": "Furry Epoch",
"type": "ckpt",
"description": "Furry styled generations.",
"version": "4",
"style": "furry",
"nsfw": false,
"download_all": false,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/furry-diffusion.ckpt",
"md5sum": "f8ef45a295ef4966682f6e8fc2c6830d"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "furry-diffusion.ckpt",
"file_path": "models/custom",
"file_url": "https://sexy.canine.wf/file/furry-ckpt/furry_epoch4.ckpt"
}
]
},
"available": false
},
"Yiffy": {
"name": "Yiffy",
"type": "ckpt",
"description": "Furry styled generations.",
"version": "18",
"style": "furry",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/yiffy.ckpt",
"md5sum": "dbe25794e24af183565dc45e9ec99713"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "yiffy.ckpt",
"file_path": "models/custom",
"file_url": "https://sexy.canine.wf/file/yiffy-ckpt/yiffy-e18.ckpt"
}
]
},
"available": false
},
"Zack3D": {
"name": "Zack3D",
"type": "ckpt",
"description": "Kink/NSFW oriented furry styled generations.",
"version": "1",
"style": "furry",
"nsfw": true,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/Zack3D.ckpt",
"md5sum": "aa944b1ecdaac60113027a0fdcda4f1b"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "Zack3D.ckpt",
"file_path": "models/custom",
"file_url": "https://sexy.canine.wf/file/furry-ckpt/Zack3D_Kinky-v1.ckpt"
}
]
},
"available": false
},
"trinart": {
"name": "trinart",
"type": "ckpt",
"description": "Manga styled generations.",
"version": "1",
"style": "anime",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/trinart.ckpt"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "trinart.ckpt",
"file_path": "models/custom",
"file_url": "https://huggingface.co/naclbit/trinart_stable_diffusion_v2/resolve/main/trinart2_step95000.ckpt"
}
]
},
"available": false
},
"RealESRGAN_x4plus": {
"name": "RealESRGAN_x4plus",
"type": "realesrgan",
"description": "Upscaler.",
"version": "0.1.0",
"style": "generalist",
"config": {
"files": [
{
"path": "models/realesrgan/RealESRGAN_x4plus.pth"
}
],
"download": [
{
"file_name": "RealESRGAN_x4plus.pth",
"file_path": "models/realesrgan",
"file_url": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth"
}
]
},
"available": false
},
"RealESRGAN_x4plus_anime_6B": {
"name": "RealESRGAN_x4plus_anime_6B",
"type": "realesrgan",
"description": "Anime focused upscaler.",
"version": "0.2.2.4",
"style": "anime",
"config": {
"files": [
{
"path": "models/realesrgan/RealESRGAN_x4plus_anime_6B.pth"
}
],
"download": [
{
"file_name": "RealESRGAN_x4plus_anime_6B.pth",
"file_path": "models/realesrgan",
"file_url": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth"
}
]
},
"available": false
},
"GFPGAN": {
"name": "GFPGAN",
"type": "gfpgan",
"description": "Face correction.",
"version": "1.4",
"style": "generalist",
"config": {
"files": [
{
"path": "models/gfpgan/GFPGANv1.4.pth"
},
{
"path": "gfpgan/weights/detection_Resnet50_Final.pth"
},
{
"path": "gfpgan/weights/parsing_parsenet.pth"
}
],
"download": [
{
"file_name": "GFPGANv1.4.pth",
"file_path": "models/gfpgan",
"file_url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth"
},
{
"file_name": "detection_Resnet50_Final.pth",
"file_path": "./gfpgan/weights",
"file_url": "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth"
},
{
"file_name": "parsing_parsenet.pth",
"file_path": "./gfpgan/weights",
"file_url": "https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth"
}
]
},
"available": false
},
"LDSR": {
"name": "LDSR",
"type": "ckpt",
"description": "Upscaler.",
"version": "1",
"style": "generalist",
"nsfw": false,
"download_all": true,
"config": {
"files": [
{
"path": "models/ldsr/model.ckpt"
},
{
"path": "models/ldsr/project.yaml"
}
],
"download": [
{
"file_name": "model.ckpt",
"file_path": "models/ldsr",
"file_url": "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1"
},
{
"file_name": "project.yaml",
"file_path": "models/ldsr",
"file_url": "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1"
}
]
},
"available": false
},
"BLIP": {
"name": "BLIP",
"type": "blip",
"config": {
"files": [
{
"path": "models/blip/model__base_caption.pth"
}
],
"download": [
{
"file_name": "model__base_caption.pth",
"file_path": "models/blip",
"file_url": "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base_caption.pth"
}
]
},
"available": false
},
"ViT-L/14": {
"name": "ViT-L/14",
"type": "clip",
"config": {
"files": [
{
"path": "models/clip/ViT-L-14.pt"
}
],
"download": [
{
"file_name": "ViT-L-14.pt",
"file_path": "./models/clip",
"file_url": "https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt"
}
]
},
"available": false
},
"ViT-g-14": {
"name": "ViT-g-14",
"pretrained_name": "laion2b_s12b_b42k",
"type": "open_clip",
"config": {
"files": [
{
"path": "models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/"
}
],
"download": [
{
"file_name": "main",
"file_path": "./models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/refs",
"file_content": "b36bdd32483debcf4ed2f918bdae1d4a46ee44b8"
},
{
"file_name": "6aac683f899159946bc4ca15228bb7016f3cbb1a2c51f365cba0b23923f344da",
"file_path": "./models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/blobs",
"file_url": "https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K/resolve/main/open_clip_pytorch_model.bin"
},
{
"file_name": "open_clip_pytorch_model.bin",
"file_path": "./models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/snapshots/b36bdd32483debcf4ed2f918bdae1d4a46ee44b8",
"symlink": "./models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/blobs/6aac683f899159946bc4ca15228bb7016f3cbb1a2c51f365cba0b23923f344da"
}
]
},
"available": false
},
"ViT-H-14": {
"name": "ViT-H-14",
"pretrained_name": "laion2b_s32b_b79k",
"type": "open_clip",
"config": {
"files": [
{
"path": "models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/"
}
],
"download": [
{
"file_name": "main",
"file_path": "./models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/refs",
"file_content": "58a1e03a7acfacbe6b95ebc24ae0394eda6a14fc"
},
{
"file_name": "9a78ef8e8c73fd0df621682e7a8e8eb36c6916cb3c16b291a082ecd52ab79cc4",
"file_path": "./models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/blobs",
"file_url": "https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin"
},
{
"file_name": "open_clip_pytorch_model.bin",
"file_path": "./models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/snapshots/58a1e03a7acfacbe6b95ebc24ae0394eda6a14fc",
"symlink": "./models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/blobs/9a78ef8e8c73fd0df621682e7a8e8eb36c6916cb3c16b291a082ecd52ab79cc4"
}
]
},
"available": false
},
"diffusers_stable_diffusion": {
"name": "diffusers_stable_diffusion",
"type": "diffusers",
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/diffusers/"
}
],
"download": [
{
"file_name": "diffusers_stable_diffusion",
"file_url": "https://{username}:{password}@huggingface.co/CompVis/stable-diffusion-v1-4.git",
"git": true,
"hf_auth": true,
"post_process": [
{
"delete": "models/diffusers/stable-diffusion-v1-4/.git"
}
]
}
]
},
"available": false
}
}
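Each entry above follows the same scheme: `config.files` lists the on-disk paths the WebUI expects (optionally with an `md5sum` to verify against), and `config.download` lists where and how to fetch them, with extra flags (`file_content`, `symlink`, `git`, `hf_auth`, `post_process`) for special cases such as the open_clip cache layout and the diffusers git clone. As a rough illustration of how a plain entry resolves, not the WebUI's actual downloader, here is a sketch that assumes the JSON is saved as `db.json` and that `jq`, `curl`, and `md5sum` are installed:

```bash
#!/bin/bash
# Sketch: fetch one model's files as described by its db.json entry.
# "Yiffy" is just an example key; any entry with plain file_url downloads works.
MODEL="Yiffy"

# Download every entry that has a direct URL (skips file_content/symlink records).
jq -r --arg m "$MODEL" '.[$m].config.download[] | select(.file_url != null)
  | "\(.file_path)/\(.file_name) \(.file_url)"' db.json |
while read -r dest url; do
  mkdir -p "$(dirname "$dest")"
  [ -f "$dest" ] || curl -L "$url" -o "$dest"
done

# Verify the files that carry an md5sum; the "available" flag presumably flips
# once every path in config.files exists (and matches, where a checksum is given).
jq -r --arg m "$MODEL" '.[$m].config.files[] | select(.md5sum != null)
  | "\(.md5sum)  \(.path)"' db.json | md5sum -c -
```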

98
db_dep.json Normal file
View File

@ -0,0 +1,98 @@
{
"sd-concepts-library": {
"type": "dependency",
"optional": true,
"config": {
"files": [
{
"path": "models/custom/sd-concepts-library/"
}
],
"download": [
{
"file_name": "sd-concepts-library",
"file_path": "./models/custom/sd-concepts-library/",
"file_url": "https://github.com/sd-webui/sd-concepts-library/archive/refs/heads/main.zip",
"unzip": true,
"move_subfolder": "sd-concepts-library"
}
]
},
"available": false
},
"clip-vit-large-patch14": {
"type": "dependency",
"optional": false,
"config": {
"files": [
{
"path": "models/clip-vit-large-patch14/config.json"
},
{
"path": "models/clip-vit-large-patch14/merges.txt"
},
{
"path": "models/clip-vit-large-patch14/preprocessor_config.json"
},
{
"path": "models/clip-vit-large-patch14/pytorch_model.bin"
},
{
"path": "models/clip-vit-large-patch14/special_tokens_map.json"
},
{
"path": "models/clip-vit-large-patch14/tokenizer.json"
},
{
"path": "models/clip-vit-large-patch14/tokenizer_config.json"
},
{
"path": "models/clip-vit-large-patch14/vocab.json"
}
],
"download": [
{
"file_name": "config.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json"
},
{
"file_name": "merges.txt",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/merges.txt"
},
{
"file_name": "preprocessor_config.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/preprocessor_config.json"
},
{
"file_name": "pytorch_model.bin",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin"
},
{
"file_name": "special_tokens_map.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/special_tokens_map.json"
},
{
"file_name": "tokenizer.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/tokenizer.json"
},
{
"file_name": "tokenizer_config.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/tokenizer_config.json"
},
{
"file_name": "vocab.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/vocab.json"
}
]
},
"available": false
}
}

View File

@ -1,10 +1,12 @@
---
title: Windows Installation
---
<!--
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
Copyright 2022 sd-webui team.
<!--
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@ -19,7 +21,8 @@ You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
-->
# Initial Setup
# Initial Setup
> This is a windows guide. [To install on Linux, see this page.](2.linux-installation.md)
## Prerequisites
@ -29,62 +32,56 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
* https://gitforwindows.org/ Download this, and accept all of the default settings it offers except for the default editor selection. When it asks what the default editor should be, most people unfamiliar with this should just choose Notepad, since everyone has Notepad on Windows.
![CleanShot 2022-08-31 at 16 29 48@2x](https://user-images.githubusercontent.com/463317/187796320-e6edbb39-dff1-46a2-a1a1-c4c1875d414c.jpg)
* Download Miniconda3:
[https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe](https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe) Get this installed so that you have access to the Miniconda3 Prompt Console.
[https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe](https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe) Get this installed so that you have access to the Miniconda3 Prompt Console.
* Open Miniconda3 Prompt from your start menu after it has been installed
* _(Optional)_ Create a new text file in your root directory `/stable-diffusion-webui/custom-conda-path.txt` that contains the path to your relevant Miniconda3, for example `C:\Users\<username>\miniconda3` (replace `<username>` with your own username). This is required if you have more than 1 miniconda installation or are using custom installation location.
* _(Optional)_ Create a new text file in your root directory `/sygil-webui/custom-conda-path.txt` that contains the path to your relevant Miniconda3, for example `C:\Users\<username>\miniconda3` (replace `<username>` with your own username). This is required if you have more than one Miniconda installation or are using a custom installation location.
## Cloning the repo
Type `git clone https://github.com/sd-webui/stable-diffusion-webui.git` into the prompt.
Type `git clone https://github.com/Sygil-Dev/sygil-webui.git` into the prompt.
This will create the `stable-diffusion-webui` directory in your Windows user folder.
This will create the `sygil-webui` directory in your Windows user folder.
![CleanShot 2022-08-31 at 16 31 20@2x](https://user-images.githubusercontent.com/463317/187796462-29e5bafd-bbc1-4a48-adc8-7eccc174cb62.jpg)
---
Once a repo has been cloned, updating it is as easy as typing `git pull` inside of Miniconda when in the repo's topmost directory downloaded by the clone command. Below you can see I used the `cd` command to navigate into that folder.
![CleanShot 2022-08-31 at 16 36 34@2x](https://user-images.githubusercontent.com/463317/187796970-db94402f-717b-43a8-9c85-270c0cd256c3.jpg)
![CleanShot 2022-08-31 at 16 36 34@2x](https://user-images.githubusercontent.com/463317/187796970-db94402f-717b-43a8-9c85-270c0cd256c3.jpg)
* Next you are going to want to create a Hugging Face account: [https://huggingface.co/](https://huggingface.co/)
* After you have signed up, and are signed in go to this link and click on Authorize: [https://huggingface.co/CompVis/stable-diffusion-v-1-4-original](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
* After you have authorized your account, go to this link to download the model weights for version 1.4 of the model; future versions will be released in the same way, and updating them will be a similar process:
[https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt)
* Download the model into this directory: `C:\Users\<username>\stable-diffusion-webui\models\ldm\stable-diffusion-v1`
[https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt)
* Download the model into this directory: `C:\Users\<username>\sygil-webui\models\ldm\stable-diffusion-v1`
* Rename `sd-v1-4.ckpt` to `model.ckpt` once it is inside the stable-diffusion-v1 folder.
* Since we are already in our stable-diffusion-webui folder in Miniconda, our next step is to create the environment Stable Diffusion needs to work.
* Since we are already in our sygil-webui folder in Miniconda, our next step is to create the environment Stable Diffusion needs to work.
* _(Optional)_ If you already have an environment set up for an installation of Stable Diffusion named ldm open up the `environment.yaml` file in `\stable-diffusion-webui\` change the environment name inside of it from `ldm` to `ldo`
* _(Optional)_ If you already have an environment set up for an installation of Stable Diffusion named `ldm`, open up the `environment.yaml` file in `\sygil-webui\` and change the environment name inside of it from `ldm` to `ldo`
---
## First run
* `webui.cmd` at the root folder (`\stable-diffusion-webui\`) is your main script that you'll always run. It has the functions to automatically do the followings:
* Create conda env
* Install and update requirements
* Run the relauncher and webui.py script for gradio UI options
* `webui.cmd` at the root folder (`\sygil-webui\`) is your main script that you'll always run. It has the functions to automatically do the following:
* Create conda env
* Install and update requirements
* Run the relauncher and webui.py script for gradio UI options
* Run `webui.cmd` by double-clicking the file.
* Wait for it to process; this could take some time. Eventually it'll look like this:
![First successful run](https://user-images.githubusercontent.com/3688500/189009827-66c5df32-be44-4851-a265-6791444f537f.JPG)
* You'll receive warning messages on **GFPGAN**, **RealESRGAN** and **LDSR**, but these are optional and will be further explained below.
@ -95,34 +92,36 @@ Once a repo has been cloned, updating it is as easy as typing `git pull` inside
* You should be able to see progress in your `webui.cmd` window. The page at [http://localhost:7860/](http://localhost:7860/) will be automatically updated to show the final image once progress reaches 100%.
* Images created with the web interface will be saved to `\stable-diffusion-webui\outputs\` in their respective folders alongside `.yaml` text files with all of the details of your prompts for easy referencing later. Images will also be saved with their seed and numbered so that they can be cross referenced with their `.yaml` files easily.
* Images created with the web interface will be saved to `\sygil-webui\outputs\` in their respective folders alongside `.yaml` text files with all of the details of your prompts for easy referencing later. Images will also be saved with their seed and numbered so that they can be cross referenced with their `.yaml` files easily.
---
### Optional additional models
### Optional additional models
There are three more models that we need to download in order to get the most out of the functionality offered by sd-webui.
There are three more models that we need to download in order to get the most out of the functionality offered by Sygil-Dev.
> The models are placed inside the `src` folder. If you don't have a `src` folder inside your root directory, it means that you haven't installed the dependencies for your environment yet. [Follow this step](#first-run) before proceeding.
### GFPGAN
1. If you want to use GFPGAN to improve generated faces, you need to install it separately.
1. Download [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it
into the `/stable-diffusion-webui/models/gfpgan` directory.
2. Download [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it into the `/sygil-webui/models/gfpgan` directory.
### RealESRGAN
1. Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).
1. Put them into the `stable-diffusion-webui/models/realesrgan` directory.
2. Put them into the `sygil-webui/models/realesrgan` directory.
### LDSR
1. Detailed instructions [here](https://github.com/Hafiidz/latent-diffusion). Brief instruction as follows.
1. Git clone [Hafiidz/latent-diffusion](https://github.com/Hafiidz/latent-diffusion) into your `/stable-diffusion-webui/src/` folder.
1. Run `/stable-diffusion-webui/models/ldsr/download_model.bat` to automatically download and rename the models.
1. Wait until it is done and you can confirm by confirming two new files in `stable-diffusion-webui/models/ldsr/`
1. _(Optional)_ If there are no files there, you can manually download **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.cpkt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1).
1. Rename last.ckpt to model.ckpt and place both under `stable-diffusion-webui/models/ldsr/`.
1. Refer to [here](https://github.com/sd-webui/stable-diffusion-webui/issues/488) for any issue.
1. Detailed instructions [here](https://github.com/Hafiidz/latent-diffusion). Brief instruction as follows.
2. Git clone [Hafiidz/latent-diffusion](https://github.com/Hafiidz/latent-diffusion) into your `/sygil-webui/src/` folder.
3. Run `/sygil-webui/models/ldsr/download_model.bat` to automatically download and rename the models.
4. Wait until it is done; you can confirm by checking for two new files in `sygil-webui/models/ldsr/`
5. _(Optional)_ If there are no files there, you can manually download the **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.ckpt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1), as scripted in the sketch below.
6. Rename last.ckpt to model.ckpt and place both under `sygil-webui/models/ldsr/`.
7. Refer to [here](https://github.com/Sygil-Dev/sygil-webui/issues/488) for any issue.
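The manual fallback in step 5 can also be scripted; a sketch assuming `curl` is available in your prompt, using the exact URLs and destination listed above:

```bash
cd sygil-webui/models/ldsr
curl -L "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1" -o project.yaml
# last.ckpt is saved directly under its final name, model.ckpt
curl -L "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1" -o model.ckpt
```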
# Credits
> Modified by [Hafiidz](https://github.com/Hafiidz) with helps from sd-webui discord and team.
> Modified by [Hafiidz](https://github.com/Hafiidz) with help from the Sygil-Dev discord and team.

View File

@ -2,9 +2,9 @@
title: Linux Installation
---
<!--
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@ -42,9 +42,9 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
**Step 3:** Make the script executable by opening the directory in your Terminal and typing `chmod +x linux-sd.sh`, or whatever you named this file as.
**Step 4:** Run the script with `./linux-sd.sh`, it will begin by cloning the [WebUI Github Repo](https://github.com/sd-webui/stable-diffusion-webui) to the directory the script is located in. This folder will be named `stable-diffusion-webui`.
**Step 4:** Run the script with `./linux-sd.sh`, it will begin by cloning the [WebUI Github Repo](https://github.com/Sygil-Dev/sygil-webui) to the directory the script is located in. This folder will be named `sygil-webui`.
**Step 5:** The script will pause and ask that you move/copy the downloaded 1.4 AI models to the `stable-diffusion-webui` folder. Press Enter once you have done so to continue.
**Step 5:** The script will pause and ask that you move/copy the downloaded 1.4 AI models to the `sygil-webui` folder. Press Enter once you have done so to continue.
**If you are running low on storage space, you can just move the 1.4 AI model file directly to this directory; it will not be deleted, simply moved and renamed. However, my personal suggestion is to just copy it to the repo folder, in case you desire to delete and rebuild your Stable Diffusion build again.**
@ -76,7 +76,7 @@ The user will have the ability to set these to yes or no using the menu choices.
- Uses An Older Interface Style
- Will Not Receive Major Updates
**Step 9:** If everything has gone successfully, either a new browser window will open with the Streamlit version, or you should see `Running on local URL: http://localhost:7860/` in your Terminal if you launched the Gradio Interface version. Generated images will be located in the `outputs` directory inside of `stable-diffusion-webui`. Enjoy the definitive Stable Diffusion WebUI experience on Linux! :)
**Step 9:** If everything has gone successfully, either a new browser window will open with the Streamlit version, or you should see `Running on local URL: http://localhost:7860/` in your Terminal if you launched the Gradio Interface version. Generated images will be located in the `outputs` directory inside of `sygil-webui`. Enjoy the definitive Stable Diffusion WebUI experience on Linux! :)
## Ultimate Stable Diffusion Customizations
@ -87,7 +87,7 @@ If the user chooses to Customize their setup, then they will be presented with t
- Update the Stable Diffusion WebUI fork from the GitHub Repo
- Customize the launch arguments for Gradio Interface version of Stable Diffusion (See Above)
### Refer back to the original [WebUI Github Repo](https://github.com/sd-webui/stable-diffusion-webui) for useful tips and links to other resources that can improve your Stable Diffusion experience
### Refer back to the original [WebUI Github Repo](https://github.com/Sygil-Dev/sygil-webui) for useful tips and links to other resources that can improve your Stable Diffusion experience
## Planned Additions
- Investigate ways to handle Anaconda automatic installation on a user's system.

View File

@ -2,7 +2,7 @@
title: Running Stable Diffusion WebUI Using Docker
---
<!--
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
This program is free software: you can redistribute it and/or modify
@ -19,6 +19,34 @@ You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
-->
## Running prebuilt image
The easiest way to run Stable Diffusion WebUI is to use the prebuilt image from Docker Hub.
```bash
docker pull hlky/sd-webui:runpod
```
This image has all the necessary models baked in. It is quite large but streamlines the process of managing the various models and simplifies the user experience.
Alternatively you can pull:
```bash
docker pull hlky/sd-webui:latest
```
This image includes the barebones environment to run the Web UI. The models will be downloaded during the installation process. You will have to take care of the volume for the `sd/models` directory.
It is recommended that you run the `runpod` version.
You can run the image using the following command:
```bash
docker container run --rm -d -p 8501:8501 -e STREAMLIT_SERVER_HEADLESS=true -e "WEBUI_SCRIPT=webui_streamlit.py" -e "VALIDATE_MODELS=false" -v "${PWD}/outputs:/sd/outputs" --gpus all hlky/sd-webui:runpod
```
> Note: if you are running it on runpod, it only supports one volume mount, which is used for your outputs.
> Note: if you are running it on your local machine, the output directory will be created in the current directory from where you run this command.
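If you run the `:latest` image instead, one way to provide the `sd/models` volume mentioned above is to bind-mount a local `models` directory next to the outputs mount. A sketch reusing the flags from the command above; the extra `-v` for models is the only addition, and the host paths are just examples:

```bash
docker container run --rm -d -p 8501:8501 \
  -e STREAMLIT_SERVER_HEADLESS=true \
  -e "WEBUI_SCRIPT=webui_streamlit.py" \
  -v "${PWD}/outputs:/sd/outputs" \
  -v "${PWD}/models:/sd/models" \
  --gpus all hlky/sd-webui:latest
```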
## Building the image
This Docker environment is intended to speed up development and testing of Stable Diffusion WebUI features. Use of a container image format allows for packaging and isolation of Stable Diffusion / WebUI's dependencies separate from the Host environment.
You can use this Dockerfile to build a Docker image and run Stable Diffusion WebUI locally.
@ -41,7 +69,7 @@ Additional Requirements:
Other Notes:
* "Optional" packages commonly used with Stable Diffusion WebUI workflows such as, RealESRGAN, GFPGAN, will be installed by default.
* An older version of running Stable Diffusion WebUI using Docker exists here: https://github.com/sd-webui/stable-diffusion-webui/discussions/922
* An older version of running Stable Diffusion WebUI using Docker exists here: https://github.com/Sygil-Dev/sygil-webui/discussions/922
### But what about AMD?
There is tentative support for AMD GPUs through docker which can be enabled via `docker-compose.amd.yml`,
@ -63,7 +91,7 @@ in your `.profile` or through a tool like `direnv`
### Clone Repository
* Clone this repository to your host machine:
* `git clone https://github.com/sd-webui/stable-diffusion-webui.git`
* `git clone https://github.com/Sygil-Dev/sygil-webui.git`
* If you plan to use Docker Compose to run the image in a container (most users), create an `.env_docker` file using the example file:
* `cp .env_docker.example .env_docker`
* Edit `.env_docker` using the text editor of your choice.
@ -77,7 +105,7 @@ The default `docker-compose.yml` file will create a Docker container instance n
* Create an instance of the Stable Diffusion WebUI image as a Docker container:
* `docker compose up`
* During the first run, the container image will be built containing all of the dependencies necessary to run Stable Diffusion. This build process will take several minutes to complete.
* After the image build has completed, you will have a docker image for running the Stable Diffusion WebUI tagged `stable-diffusion-webui:dev`
* After the image build has completed, you will have a docker image for running the Stable Diffusion WebUI tagged `sygil-webui:dev`
(Optional) Daemon mode:
* You can start the container in "daemon" mode by applying the `-d` option: `docker compose up -d`. This will run the server in the background so you can close your console window without losing your work.
@ -132,9 +160,9 @@ You will need to re-download all associated model files/weights used by Stable D
* `docker exec -it st-webui /bin/bash`
* `docker compose exec stable-diffusion bash`
* To start a container using the Stable Diffusion WebUI Docker image without Docker Compose, you can do so with the following command:
* `docker run --rm -it --entrypoint /bin/bash stable-diffusion-webui:dev`
* `docker run --rm -it --entrypoint /bin/bash sygil-webui:dev`
* To start a container, with mapped ports, GPU resource access, and a local directory bound as a container volume, you can do so with the following command:
* `docker run --rm -it -p 8501:8501 -p 7860:7860 --gpus all -v $(pwd):/sd --entrypoint /bin/bash stable-diffusion-webui:dev`
* `docker run --rm -it -p 8501:8501 -p 7860:7860 --gpus all -v $(pwd):/sd --entrypoint /bin/bash sygil-webui:dev`
---

View File

@ -2,9 +2,9 @@
title: Streamlit Web UI Interface
---
<!--
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@ -94,7 +94,7 @@ Streamlit Image2Image allows for you to take an image, be it generated by Stable
The Concept Library allows for the easy usage of custom textual inversion models. These models may be loaded into `models/custom/sd-concepts-library` and will appear in the Concepts Library in Streamlit. To use one of these custom models in a prompt, either copy it using the button on the model, or type `<model-name>` in the prompt where you wish to use it.
Please see the [Concepts Library](https://github.com/sd-webui/stable-diffusion-webui/blob/master/docs/7.concepts-library.md) section to learn more about how to use these tools.
Please see the [Concepts Library](https://github.com/Sygil-Dev/sygil-webui/blob/master/docs/7.concepts-library.md) section to learn more about how to use these tools.
## Textual Inversion
---

92
docs/44.competition.md Normal file
View File

@ -0,0 +1,92 @@
# Textual inversion usage competition
We are hosting a competition where the community can showcase their most inventive use of textual inversion concepts in text-to-image or text-to-video.
Our compute cluster, `Nataili`, currently comprises 3 nodes: two have a 3090, and the other has 2 x A5000.
We estimate `Nataili` can handle 12 concepts per hour, and we can add more workers if there is high demand.
Hopefully demand will be high; we want to train **hundreds** of new concepts!
# Schedule
2022/10/20 - Stage 1 begins, train concept command opened for usage
2022/10/22 12AM UTC - Stage 2 begins, text to image command opened for usage
2022/10/22 12PM UTC - Stage 1 ends, train concept command closed
2022/10/24 12PM UTC - Stage 2 ends, no more entries will be accepted
2022/10/24 6-12PM UTC - Winners announced
# What does `most inventive use` mean?
Whatever you want it to mean! Be creative! Experiment!
There are several categories we will look at:
* anything that's particularly creative, ~ artistic ~ or a e s t h e t i c
![20221019203426_00000](https://user-images.githubusercontent.com/106811348/197045193-d6f9c56b-9989-4f1c-b42a-bb02d62d77cd.png)
* composition, meaning anything related to how big things are, their position, the angle, etc.
* styling;
![image](https://user-images.githubusercontent.com/106811348/197045629-029ba6f5-1f79-475c-9ce7-969aaf3d253b.png)
* `The Sims(TM): Stable Diffusion edition`
## So I can trai-
* Yes, as long as it's SFW
## `The Sims(TM): Stable Diffusion edition`?
For this event the theme is “The Sims: Stable Diffusion edition”.
So we have selected a subset of [products from the Amazon Berkeley Objects dataset](https://github.com/sd-webui/abo).
Any other object is welcome too; these are just a good source of data for this part of the competition.
Each product has images from multiple angles, the train concept command accepts up to 10 images, so choose the angles and modify backgrounds, experiment!
The goal with this category is to generate an image using the trained object, and the other categories apply; your imagination is the only limit! Style a couch, try to make a BIG couch, try to make a couch on top of a mountain, try to make a vaporwave couch, anything!
# How do I train a concept using the discord bot?
Type `/trainconcept`, then press Tab to go through the fields.
`Concept name` is just a name for your concept; it doesn't have to be a single word
`Placeholder` is what you will use in prompts to represent your concept
Add `<` and `>` so it is unique; multiple words should be hyphenated
`Initializer` is used as the starting point for training your concept, so this should be a single word that represents your concept
Minimum 2 images. Square-ish aspect ratios work best
![Untitled-2](https://user-images.githubusercontent.com/106811348/197035834-cc973e29-31f8-48de-be2d-788fbe938b2e.png)
![image](https://user-images.githubusercontent.com/106811348/197035870-b91ef2a8-0ffd-47e1-a8df-9600df26cd6b.png)
# How do I use the trained concept?
## Prompting with concepts
When your concept is trained you can use it in prompts.
`a cute <nvidiafu> as an astronaut`:
![image](https://user-images.githubusercontent.com/106811348/197037250-044ea241-72a5-4caa-b772-35034245b4b6.png)
or `a green <green-couch> sitting on top of a floor, a 3D render, trending on polycount, minimalism, rendered in cinema4d`:
![image](https://user-images.githubusercontent.com/106811348/197037344-7ce72188-9129-4ba2-8a28-cba5fd664a9c.png)
## Using concepts in the webui
The discord bot will give you a link to a `.zip` file; download this, extract it, and put the folder in `sygil-webui/models/custom/sd-concepts-library`
![image](https://user-images.githubusercontent.com/106811348/197037892-ce53bea4-d1db-4b25-bb7c-7dfe4d71b2b1.png)
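On the command line the same step looks like this; a sketch assuming the bot's archive is named `my-concept.zip` (the real name will differ per concept) and that it unzips to a single concept folder:

```bash
unzip my-concept.zip -d sygil-webui/models/custom/sd-concepts-library/
```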

View File

@ -2,9 +2,9 @@
title: Gradio Web UI Interface
---
<!--
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or

View File

@ -2,9 +2,9 @@
title: Upscalers
---
<!--
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@ -32,7 +32,7 @@ GFPGAN is designed to help restore faces in Stable Diffusion outputs. If you hav
If you want to use GFPGAN to improve generated faces, you need to download the models for it separately if you are on Windows, or do so manually on Linux.
Download [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and put it
into the `/stable-diffusion-webui/models/gfpgan` directory after you have setup the conda environment for the first time.
into the `/sygil-webui/models/gfpgan` directory after you have set up the conda environment for the first time.
## RealESRGAN
---
@ -42,7 +42,7 @@ RealESRGAN is a 4x upscaler built into both versions of the Web UI interface. It
If you want to use RealESRGAN to upscale your images, you need to download the models for it separately if you are on Windows, or do so manually on Linux.
Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).
Put them into the `stable-diffusion-webui/models/realesrgan` directory after you have setup the conda environment for the first time.
Put them into the `sygil-webui/models/realesrgan` directory after you have set up the conda environment for the first time.
## GoBig (Gradio only currently)
---
@ -57,7 +57,7 @@ To use GoBig, you will need to download the RealESRGAN models as directed above.
LDSR is a 4x upscaler with high VRAM usage that uses a Latent Diffusion model to upscale the image. This will accentuate the details of an image, but won't change the composition. This might introduce sharpening, but it is great for textures or compositions with plenty of details. However, it is slower and will use more VRAM.
If you want to use LDSR to upscale your images, you need to download the models for it separately if you are on Windows, or do so manually on Linux.
Download the LDSR [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [ model last.cpkt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename `last.ckpt` to `model.ckpt` and place both in the `stable-diffusion-webui/models/ldsr` directory after you have setup the conda environment for the first time.
Download the LDSR [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.ckpt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename `last.ckpt` to `model.ckpt` and place both in the `sygil-webui/models/ldsr` directory after you have set up the conda environment for the first time.
## GoLatent (Gradio only currently)
---

View File

@ -1,7 +1,7 @@
<!--
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or

View File

@ -2,9 +2,9 @@
title: Custom models
---
<!--
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or

View File

@ -1,7 +1,7 @@
#!/bin/bash
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -111,7 +111,7 @@ if [[ -e "${MODEL_DIR}/sd-concepts-library" ]]; then
else
# concept library does not exist, clone
cd ${MODEL_DIR}
git clone https://github.com/sd-webui/sd-concepts-library.git
git clone https://github.com/Sygil-Dev/sd-concepts-library.git
fi
# create directory and link concepts library
mkdir -p ${SCRIPT_DIR}/models/custom

View File

@ -1,7 +1,7 @@
name: ldm
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -29,52 +29,5 @@ dependencies:
- scikit-image=0.19.2
- torchvision=0.12.0
- pip:
- -e .
- -e git+https://github.com/CompVis/taming-transformers#egg=taming-transformers
- -e git+https://github.com/openai/CLIP#egg=clip
- -e git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion
- -e git+https://github.com/devilismyfriend/latent-diffusion#egg=latent-diffusion
- accelerate==0.12.0
- albumentations==0.4.3
- basicsr>=1.3.4.0
- diffusers==0.3.0
- einops==0.3.1
- facexlib>=0.2.3
- ftfy==6.1.1
- fairscale==0.4.4
- gradio==3.1.6
- gfpgan==1.3.8
- hydralit_components==1.0.10
- hydralit==1.0.14
- imageio-ffmpeg==0.4.2
- imageio==2.9.0
- kornia==0.6
- loguru
- omegaconf==2.1.1
- opencv-python-headless==4.6.0.66
- open-clip-torch==2.0.2
- pandas==1.4.3
- piexif==1.1.3
- pudb==2019.2
- pynvml==11.4.1
- python-slugify>=6.1.2
- pytorch-lightning==1.4.2
- retry>=0.9.2
- regex
- realesrgan==0.3.0
- streamlit==1.13.0
- streamlit-on-Hover-tabs==1.0.1
- streamlit-option-menu==0.3.2
- streamlit_nested_layout
- streamlit-server-state==0.14.2
- streamlit-tensorboard==0.0.2
- test-tube>=0.7.5
- tensorboard==2.10.1
- timm==0.6.7
- torch-fidelity==0.3.0
- torchmetrics==0.6.0
- transformers==4.19.2
- tensorflow==2.10.0
- tqdm==4.64.0
- wget
- -r requirements.txt

View File

@ -1,7 +1,7 @@
/*
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or

View File

@ -1,7 +1,7 @@
/*
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@ -26,10 +26,11 @@ button[data-baseweb="tab"] {
}
/* Image Container (only appear after run finished)//center the image, especially better looks in wide screen */
.css-du1fp8 {
justify-content: center;
.css-1kyxreq{
justify-content: center;
}
/* Streamlit header */
.css-1avcm0n {
background-color: transparent;
@ -135,6 +136,7 @@ div.gallery:hover {
/********************************************************************
Hide anchor links on titles
*********************************************************************/
/*
.css-15zrgzn {
display: none
}
@ -143,4 +145,34 @@ div.gallery:hover {
}
.css-jn99sy {
display: none
}
}
/* Make the text area widget have a similar height as the text input field */
.st-dy{
height: 54px;
min-height: 25px;
}
.css-17useex{
gap: 3px;
}
/* Remove some empty spaces to make the UI more compact. */
.css-18e3th9{
padding-left: 10px;
padding-right: 30px;
position: unset !important; /* Fixes the layout/page going up when an expander or another item is expanded and then collapsed */
}
.css-k1vhr4{
padding-top: initial;
}
.css-ret2ud{
padding-left: 10px;
padding-right: 30px;
gap: initial;
display: initial;
}
.css-w5z5an{
gap: 1px;
}

View File

@ -1,7 +1,7 @@
/*
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@ -88,3 +88,11 @@ input[type=number]:disabled { -moz-appearance: textfield; }
/* fix buttons layouts */
}
/* Gradio 3.4 FIXES */
#prompt_row button {
max-width: 20ch;
}
#text2img_col2 {
flex-grow: 2 !important;
}

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -65,7 +65,7 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
txt2img_dimensions_info_text_box = gr.Textbox(
label="Aspect ratio (4:3 = 1.333 | 16:9 = 1.777 | 21:9 = 2.333)")
with gr.Column():
with gr.Column(elem_id="text2img_col2"):
with gr.Box():
output_txt2img_gallery = gr.Gallery(label="Images", elem_id="txt2img_gallery_output").style(
grid=[4, 4])
@ -312,7 +312,7 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
label='Batch count (how many batches of images to generate)',
value=img2img_defaults['n_iter'])
img2img_dimensions_info_text_box = gr.Textbox(
label="Aspect ratio (4:3 = 1.333 | 16:9 = 1.777 | 21:9 = 2.333)")
label="Aspect ratio (4:3 = 1.333 | 16:9 = 1.777 | 21:9 = 2.333)", lines="2")
with gr.Column():
img2img_steps = gr.Slider(minimum=1, maximum=250, step=1, label="Sampling Steps",
value=img2img_defaults['ddim_steps'])
@ -499,11 +499,11 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
if GFPGAN is None:
gr.HTML("""
<div id="90" style="max-width: 100%; font-size: 14px; text-align: center;" class="output-markdown gr-prose border-solid border border-gray-200 rounded gr-panel">
<p><b> Please download GFPGAN to activate face fixing features</b>, instructions are available at the <a href='https://github.com/hlky/stable-diffusion-webui'>Github</a></p>
<p><b> Please download GFPGAN to activate face fixing features</b>, instructions are available at the <a href='https://github.com/Sygil-Dev/sygil-webui'>Github</a></p>
</div>
""")
# gr.Markdown("")
# gr.Markdown("<b> Please download GFPGAN to activate face fixing features</b>, instructions are available at the <a href='https://github.com/hlky/stable-diffusion-webui'>Github</a>")
# gr.Markdown("<b> Please download GFPGAN to activate face fixing features</b>, instructions are available at the <a href='https://github.com/Sygil-Dev/sygil-webui'>Github</a>")
with gr.Column():
gr.Markdown("<b>GFPGAN Settings</b>")
imgproc_gfpgan_strength = gr.Slider(minimum=0.0, maximum=1.0, step=0.001,
@ -517,7 +517,7 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
else:
gr.HTML("""
<div id="90" style="max-width: 100%; font-size: 14px; text-align: center;" class="output-markdown gr-prose border-solid border border-gray-200 rounded gr-panel">
<p><b> Please download LDSR to activate more upscale features</b>, instructions are available at the <a href='https://github.com/hlky/stable-diffusion-webui'>Github</a></p>
<p><b> Please download LDSR to activate more upscale features</b>, instructions are available at the <a href='https://github.com/Sygil-Dev/sygil-webui'>Github</a></p>
</div>
""")
upscaleModes = ['RealESRGAN', 'GoBig']
@ -627,7 +627,7 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
# separator
gr.HTML("""
<div id="90" style="max-width: 100%; font-size: 14px; text-align: center;" class="output-markdown gr-prose border-solid border border-gray-200 rounded gr-panel">
<p><b> Please download RealESRGAN to activate upscale features</b>, instructions are available at the <a href='https://github.com/hlky/stable-diffusion-webui'>Github</a></p>
<p><b> Please download RealESRGAN to activate upscale features</b>, instructions are available at the <a href='https://github.com/Sygil-Dev/sygil-webui'>Github</a></p>
</div>
""")
imgproc_toggles.change(fn=uifn.toggle_options_gfpgan, inputs=[imgproc_toggles], outputs=[gfpgan_group])
@ -860,9 +860,9 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
"""
gr.HTML("""
<div id="90" style="max-width: 100%; font-size: 14px; text-align: center;" class="output-markdown gr-prose border-solid border border-gray-200 rounded gr-panel">
<p>For help and advanced usage guides, visit the <a href="https://github.com/hlky/stable-diffusion-webui/wiki" target="_blank">Project Wiki</a></p>
<p>Stable Diffusion WebUI is an open-source project. You can find the latest stable builds on the <a href="https://github.com/hlky/stable-diffusion" target="_blank">main repository</a>.
If you would like to contribute to development or test bleeding edge builds, you can visit the <a href="https://github.com/hlky/stable-diffusion-webui" target="_blank">developement repository</a>.</p>
<p>For help and advanced usage guides, visit the <a href="https://github.com/Sygil-Dev/sygil-webui/wiki" target="_blank">Project Wiki</a></p>
<p>Stable Diffusion WebUI is an open-source project. You can find the latest stable builds on the <a href="https://github.com/Sygil-Dev/stable-diffusion" target="_blank">main repository</a>.
If you would like to contribute to development or test bleeding edge builds, you can visit the <a href="https://github.com/Sygil-Dev/sygil-webui" target="_blank">development repository</a>.</p>
<p>Device ID {current_device_index}: {current_device_name}<br/>{total_device_count} total devices</p>
</div>
""".format(current_device_name=torch.cuda.get_device_name(), current_device_index=torch.cuda.current_device(), total_device_count=torch.cuda.device_count()))

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

View File

@ -1,7 +1,7 @@
@echo off
:: This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
:: This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
:: Copyright 2022 sd-webui team.
:: Copyright 2022 Sygil-Dev team.
:: This program is free software: you can redistribute it and/or modify
:: it under the terms of the GNU Affero General Public License as published by
:: the Free Software Foundation, either version 3 of the License, or
@ -58,20 +58,23 @@ IF "%v_conda_path%"=="" (
:CONDA_FOUND
echo Stashing local changes and pulling latest update...
git status --porcelain=1 -uno | findstr . && set "HasChanges=1" || set "HasChanges=0"
call git stash
call git pull
IF "%HasChanges%" == "0" GOTO SKIP_RESTORE
set /P restore="Do you want to restore changes you made before updating? (Y/N): "
IF /I "%restore%" == "N" (
echo Removing changes please wait...
echo Removing changes...
call git stash drop
echo Changes removed, press any key to continue...
pause >nul
echo "Changes removed"
) ELSE IF /I "%restore%" == "Y" (
echo Restoring changes, please wait...
echo Restoring changes...
call git stash pop --quiet
echo Changes restored, press any key to continue...
pause >nul
echo "Changes restored"
)
:SKIP_RESTORE
call "%v_conda_path%\Scripts\activate.bat"
for /f "delims=" %%a in ('git log -1 --format^="%%H" -- environment.yaml') DO set v_cur_hash=%%a

View File

@ -1,7 +1,7 @@
#!/bin/bash -i
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -30,7 +30,7 @@ LSDR_CONFIG="https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1"
LSDR_MODEL="https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1"
REALESRGAN_MODEL="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth"
REALESRGAN_ANIME_MODEL="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth"
SD_CONCEPT_REPO="https://github.com/sd-webui/sd-concepts-library/archive/refs/heads/main.zip"
SD_CONCEPT_REPO="https://github.com/Sygil-Dev/sd-concepts-library/archive/refs/heads/main.zip"
if [[ -f $ENV_MODIFED_FILE ]]; then
@ -91,7 +91,7 @@ sd_model_loading () {
printf "AI Model already in place. Continuing...\n\n"
else
printf "\n\n########## MOVE MODEL FILE ##########\n\n"
printf "Please download the 1.4 AI Model from Huggingface (or another source) and place it inside of the stable-diffusion-webui folder\n\n"
printf "Please download the 1.4 AI Model from Huggingface (or another source) and place it inside of the sygil-webui folder\n\n"
read -p "Once you have sd-v1-4.ckpt in the project root, Press Enter...\n\n"
# Check to make sure checksum of models is the original one from HuggingFace and not a fake model set
@ -162,7 +162,7 @@ start_initialization () {
echo "Your model file does not exist! Place it in 'models/ldm/stable-diffusion-v1' with the name 'model.ckpt'."
exit 1
fi
printf "\nStarting Stable Horde Bridg: Please Wait...\n"; python scripts/relauncher.py --bridge -v "$@"; break;
printf "\nStarting Stable Horde Bridge: Please Wait...\n"; python scripts/relauncher.py --bridge -v "$@"; break;
}

View File

@ -0,0 +1,28 @@
#!/bin/bash
# For developers only! Not for users!
# This creates the installer zip files that will be distributed to users
# It packs install.{sh,bat} along with a readme, and ensures that the user
# has the install script inside a new empty folder (after unzipping);
# otherwise the git repo will extract into whatever folder the script is in.
cd "$(dirname "${BASH_SOURCE[0]}")"
# make the installer zip for linux and mac
rm -rf sygil
mkdir -p sygil
cp install.sh sygil
cp readme.txt sygil
zip -r sygil-linux.zip sygil
zip -r sygil-mac.zip sygil
# make the installer zip for windows
rm -rf sygil
mkdir -p sygil
cp install.bat sygil
cp readme.txt sygil
zip -r sygil-windows.zip sygil
echo "The installer zips are ready to be distributed.."

96
installer/install.bat Normal file
View File

@ -0,0 +1,96 @@
@echo off
@rem This script will install git and conda (if not found on the PATH variable)
@rem using micromamba (an 8mb static-linked single-file binary, conda replacement).
@rem For users who already have git and conda, this step will be skipped.
@rem Then, it'll run the webui.cmd file to continue with the installation as usual.
@rem This enables a user to install this project without manually installing conda and git.
echo "Installing Sygil WebUI.."
echo.
@rem config
set MAMBA_ROOT_PREFIX=%cd%\installer_files\mamba
set INSTALL_ENV_DIR=%cd%\installer_files\env
set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe
set REPO_URL=https://github.com/Sygil-Dev/sygil-webui.git
@rem Change the download URL to Sygil repo's release URL
@rem We need to mirror micromamba.exe, because the official download URL uses tar.bz2 compression
@rem which Windows can't unzip natively.
@rem https://mamba.readthedocs.io/en/latest/installation.html#windows
set umamba_exists=F
@rem figure out whether git and conda needs to be installed
if exist "%INSTALL_ENV_DIR%" set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%
set PACKAGES_TO_INSTALL=
call conda --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% conda
call git --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" EQU "0" set umamba_exists=T
@rem (if necessary) install git and conda into a contained environment
if "%PACKAGES_TO_INSTALL%" NEQ "" (
@rem download micromamba
if "%umamba_exists%" == "F" (
echo "Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to %MAMBA_ROOT_PREFIX%\micromamba.exe"
mkdir "%MAMBA_ROOT_PREFIX%"
call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > "%MAMBA_ROOT_PREFIX%\micromamba.exe"
@rem test the mamba binary
echo Micromamba version:
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" --version
)
@rem create the installer env
if not exist "%INSTALL_ENV_DIR%" (
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" create -y --prefix "%INSTALL_ENV_DIR%"
)
echo "Packages to install:%PACKAGES_TO_INSTALL%"
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" install -y --prefix "%INSTALL_ENV_DIR%" -c conda-forge %PACKAGES_TO_INSTALL%
if not exist "%INSTALL_ENV_DIR%" (
echo "There was a problem while installing%PACKAGES_TO_INSTALL% using micromamba. Cannot continue."
pause
exit /b
)
)
set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%
@rem get the repo (and load into the current directory)
if not exist ".git" (
call git config --global init.defaultBranch master
call git init
call git remote add origin %REPO_URL%
call git fetch
call git checkout origin/master -ft
)
@rem activate the base env
call conda activate
@rem make the models dir
mkdir models\ldm\stable-diffusion-v1
@rem install the project
call webui.cmd
@rem finally, tell the user that they need to download the ckpt
echo.
echo "Now you need to install the weights for the stable diffusion model."
echo "Please follow the steps related to models weights at https://sd-webui.github.io/stable-diffusion-webui/docs/1.windows-installation.html#cloning-the-repo to complete the installation"
@rem it would be nice if the weights downloaded automatically, and didn't need the user to do this manually.
pause

90
installer/install.sh Executable file
View File

@ -0,0 +1,90 @@
#!/bin/bash
# This script will install git and conda (if not found on the PATH variable)
# using micromamba (an 8 MB statically linked single-file binary that replaces conda).
# For users who already have git and conda, this step will be skipped.
# Then, it'll run the Linux installer script (linux-sd.sh) to continue with the installation as usual.
# This enables a user to install this project without manually installing conda and git.
cd "$(dirname "${BASH_SOURCE[0]}")"
echo "Installing Sygil WebUI.."
echo ""
OS_ARCH=$(uname -m)
case "${OS_ARCH}" in
x86_64*) OS_ARCH="64";;
arm64*) OS_ARCH="aarch64";;
*) echo "Unknown system architecture: $OS_ARCH! This script runs only on x86_64 or arm64" && exit
esac
# config
export MAMBA_ROOT_PREFIX="$(pwd)/installer_files/mamba"
INSTALL_ENV_DIR="$(pwd)/installer_files/env"
MICROMAMBA_DOWNLOAD_URL="https://micro.mamba.pm/api/micromamba/linux-${OS_ARCH}/latest"
umamba_exists="F"
# figure out whether git and conda need to be installed
if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi
PACKAGES_TO_INSTALL=""
if ! hash "conda" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda"; fi
if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi
if "$MAMBA_ROOT_PREFIX/micromamba" --version &>/dev/null; then umamba_exists="T"; fi
# (if necessary) install git and conda into a contained environment
if [ "$PACKAGES_TO_INSTALL" != "" ]; then
# download micromamba
if [ "$umamba_exists" == "F" ]; then
echo "Downloading micromamba from $MICROMAMBA_DOWNLOAD_URL to $MAMBA_ROOT_PREFIX/micromamba"
mkdir -p "$MAMBA_ROOT_PREFIX"
curl -L "$MICROMAMBA_DOWNLOAD_URL" | tar -xvj bin/micromamba -O > "$MAMBA_ROOT_PREFIX/micromamba"
chmod u+x "$MAMBA_ROOT_PREFIX/micromamba"
# test the mamba binary
echo "Micromamba version:"
"$MAMBA_ROOT_PREFIX/micromamba" --version
fi
# create the installer env
if [ ! -e "$INSTALL_ENV_DIR" ]; then
"$MAMBA_ROOT_PREFIX/micromamba" create -y --prefix "$INSTALL_ENV_DIR"
fi
echo "Packages to install:$PACKAGES_TO_INSTALL"
"$MAMBA_ROOT_PREFIX/micromamba" install -y --prefix "$INSTALL_ENV_DIR" -c conda-forge $PACKAGES_TO_INSTALL
if [ ! -e "$INSTALL_ENV_DIR" ]; then
echo "There was a problem while initializing micromamba. Cannot continue."
exit
fi
fi
if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi
CONDA_BASEPATH=$(conda info --base)
source "$CONDA_BASEPATH/etc/profile.d/conda.sh" # otherwise conda complains about 'shell not initialized' (needed when running in a script)
conda activate
# run the installer script for linux
curl "https://raw.githubusercontent.com/JoshuaKimsey/Linux-StableDiffusion-Script/main/linux-sd.sh" > linux-sd.sh
chmod u+x linux-sd.sh
./linux-sd.sh
# tell the user that they need to download the ckpt
WEIGHTS_DOC_URL="https://sygil-dev.github.io/sygil-webui/docs/2.linux-installation.html#initial-start-guide"
echo ""
echo "Now you need to install the weights for the stable diffusion model."
echo "Please follow the steps at $WEIGHTS_DOC_URL to complete the installation"
# it would be nice if the weights downloaded automatically, and didn't need the user to do this manually.

11
installer/readme.txt Normal file
View File

@ -0,0 +1,11 @@
Sygil WebUI
Project homepage: https://github.com/Sygil-Dev/sygil-webui
Installation on Windows:
Please double-click the 'install.bat' file (while keeping it inside the sygil folder).
Installation on Linux:
Please open the terminal, and run './install.sh' (while keeping it inside the sygil folder).
After installation, please run the 'webui.cmd' file (on Windows) or 'webui.sh' file (on Linux/Mac) to start Sygil.

View File

@ -0,0 +1,55 @@
import k_diffusion as K
import torch
import torch.nn as nn
class KDiffusionSampler:
def __init__(self, m, sampler, callback=None):
self.model = m
self.model_wrap = K.external.CompVisDenoiser(m)
self.schedule = sampler
self.generation_callback = callback
def get_sampler_name(self):
return self.schedule
def sample(self, S, conditioning, unconditional_guidance_scale, unconditional_conditioning, x_T):
sigmas = self.model_wrap.get_sigmas(S)
x = x_T * sigmas[0]
model_wrap_cfg = CFGDenoiser(self.model_wrap)
samples_ddim = None
samples_ddim = K.sampling.__dict__[f'sample_{self.schedule}'](
model_wrap_cfg, x, sigmas,
extra_args={'cond': conditioning, 'uncond': unconditional_conditioning,'cond_scale': unconditional_guidance_scale},
disable=False, callback=self.generation_callback)
#
return samples_ddim, None
class CFGMaskedDenoiser(nn.Module):
def __init__(self, model):
super().__init__()
self.inner_model = model
def forward(self, x, sigma, uncond, cond, cond_scale, mask, x0, xi):
x_in = x
x_in = torch.cat([x_in] * 2)
sigma_in = torch.cat([sigma] * 2)
cond_in = torch.cat([uncond, cond])
uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
denoised = uncond + (cond - uncond) * cond_scale
if mask is not None:
assert x0 is not None
img_orig = x0
mask_inv = 1. - mask
denoised = (img_orig * mask_inv) + (mask * denoised)
return denoised
class CFGDenoiser(nn.Module):
def __init__(self, model):
super().__init__()
self.inner_model = model
def forward(self, x, sigma, uncond, cond, cond_scale):
x_in = torch.cat([x] * 2)
sigma_in = torch.cat([sigma] * 2)
cond_in = torch.cat([uncond, cond])
uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
return uncond + (cond - uncond) * cond_scale
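
The two wrappers above implement classifier-free guidance: the model runs on a doubled batch (unconditional and conditional halves), the two predictions are blended by cond_scale, and the masked variant then composites the result over the original latents so only the masked region is re-generated. A minimal sketch of that arithmetic on dummy tensors (shapes and values are illustrative only):

import torch

# stand-ins for one denoiser step; real latents are (batch, 4, H/8, W/8)
uncond = torch.zeros(1, 4, 8, 8)   # prediction without the prompt
cond = torch.ones(1, 4, 8, 8)      # prediction with the prompt
cond_scale = 7.5                   # cfg_scale from the UI

# CFGDenoiser: push the result away from the unconditional prediction
denoised = uncond + (cond - uncond) * cond_scale

# CFGMaskedDenoiser: keep the original latents wherever mask == 0
mask = torch.zeros(1, 4, 8, 8)
mask[..., 4:] = 1.0                # re-generate only the right half
x0 = torch.full((1, 4, 8, 8), 0.5) # latents of the original image
denoised = (x0 * (1. - mask)) + (mask * denoised)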

View File

@ -1,31 +1,28 @@
transformers==4.19.2 # do not change
diffusers==0.3.0
invisible-watermark==0.1.5
pytorch_lightning==1.7.7
open-clip-torch
loguru
taming-transformers-rom1504==0.0.6 # required by ldm
wget
-e .
# See: https://github.com/CompVis/taming-transformers/issues/176
# -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers # required by ldm
# Note: taming package needs to be installed with -e option
-e git+https://github.com/CompVis/taming-transformers#egg=taming-transformers
invisible-watermark==0.1.5
taming-transformers-rom1504==0.0.6 # required by ldm
# Note: K-diffusion brings in CLIP 1.0 as a dependency automatically; will create a dependency resolution conflict when explicitly specified together
git+https://github.com/openai/CLIP.git@main#egg=clip
git+https://github.com/crowsonkb/k-diffusion.git
# Note: K-diffusion brings in CLIP 1.0 as a dependency automatically; will create a dependency resolution conflict when explicitly specified together
# git+https://github.com/openai/CLIP.git@main#egg=clip
# git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion
# Dependencies required for Stable Diffusion UI
pynvml==11.4.1
omegaconf==2.2.3
Jinja2==3.1.2 # Jinja2 is required by Gradio
# Note: Jinja2 3.x major version required due to breaking changes found in markupsafe==2.1.1; 2.0.1 is incompatible with other upstream dependencies
# see https://github.com/pallets/markupsafe/issues/304
Jinja2==3.1.2 # Jinja2 is required by Gradio
# Environment Dependencies for WebUI (gradio)
gradio==3.4
gradio==3.4.1
# Environment Dependencies for WebUI (streamlit)
streamlit==1.13.0
@ -34,8 +31,24 @@ streamlit-option-menu==0.3.2
streamlit_nested_layout==0.1.1
streamlit-server-state==0.14.2
streamlit-tensorboard==0.0.2
streamlit-elements==0.1.* # used for the draggable dashboard and new UI design (WIP)
streamlit-ace==0.1.1 # used to replace the text area on the prompt and also for the code editor tool.
hydralit==1.0.14
hydralit_components==1.0.10
stqdm==0.0.4
uvicorn
fastapi
jsonmerge==1.8.0
matplotlib==3.6.0
resize-right==0.0.2
torchdiffeq==0.2.3
# txt2vid
diffusers==0.6.0
librosa==0.9.2
# img2img inpainting
streamlit-drawable-canvas==0.9.2
# Img2text
ftfy==6.1.1
@ -45,11 +58,31 @@ timm==0.6.7
tqdm==4.64.0
tensorboard==2.10.1
# Other
retry==0.9.2 # used by sdutils
python-slugify==6.1.2 # used by sdutils
piexif==1.1.3 # used by sdutils
retry==0.9.2 # used by sd_utils
python-slugify==6.1.2 # used by sd_utils
piexif==1.1.3 # used by sd_utils
pywebview==3.6.3 # used by streamlit_webview.py
accelerate==0.12.0
albumentations==0.4.3
einops==0.3.1
facexlib>=0.2.3
imageio-ffmpeg==0.4.2
imageio==2.9.0
kornia==0.6
loguru
opencv-python-headless==4.6.0.66
open-clip-torch==2.0.2
pandas==1.4.3
pudb==2019.2
pytorch-lightning==1.7.7
realesrgan==0.3.0
test-tube>=0.7.5
timm==0.6.7
torch-fidelity==0.3.0
transformers==4.19.2 # do not change
wget
# Optional packages commonly used with Stable Diffusion workflow
@ -57,11 +90,14 @@ piexif==1.1.3 # used by sdutils
basicsr==1.4.2 # required by RealESRGAN
gfpgan==1.3.8 # GFPGAN
realesrgan==0.3.0 # RealESRGAN brings in GFPGAN as a requirement
-e git+https://github.com/devilismyfriend/latent-diffusion#egg=latent-diffusion #ldsr
-e git+https://github.com/devilismyfriend/latent-diffusion#egg=latent-diffusion
## for monocular depth estimation
tensorflow==2.10.0
# Unused Packages: No current usage but will be used in the future.
# Orphaned Packages: No usage found

View File

@ -1,7 +1,7 @@
#!/bin/bash
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

36
scripts/APIServer.py Normal file
View File

@ -0,0 +1,36 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# base webui import and utils.
#from sd_utils import *
from sd_utils import *
# streamlit imports
#streamlit components section
#other imports
import os, time, requests
import sys
#from fastapi import FastAPI
#import uvicorn
# Temp imports
# end of imports
#---------------------------------------------------------------------------------------------------------------
def layout():
st.info("Under Construction. :construction_worker:")

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -12,15 +12,18 @@
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# base webui import and utils.
from sd_utils import *
# streamlit imports
#other imports
from requests.auth import HTTPBasicAuth
from requests import HTTPError
from stqdm import stqdm
# Temp imports
# Temp imports
# end of imports
@ -28,19 +31,40 @@ from sd_utils import *
def download_file(file_name, file_path, file_url):
if not os.path.exists(file_path):
os.makedirs(file_path)
if not os.path.exists(file_path + '/' + file_name):
if not os.path.exists(os.path.join(file_path, file_name)):
print('Downloading ' + file_name + '...')
# TODO - add progress bar in streamlit
# download file with `requests``
with requests.get(file_url, stream=True) as r:
r.raise_for_status()
with open(file_path + '/' + file_name, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
if file_name == "Stable Diffusion v1.5":
if "huggingface_token" not in st.session_state or st.session_state["defaults"].general.huggingface_token == "None":
if "progress_bar_text" in st.session_state:
st.session_state["progress_bar_text"].error(
"You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token."
)
raise OSError("You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token.")
try:
with requests.get(file_url, auth=HTTPBasicAuth('token', st.session_state.defaults.general.huggingface_token) if "huggingface.co" in file_url else None, stream=True) as r:
r.raise_for_status()
with open(os.path.join(file_path, file_name), 'wb') as f:
for chunk in stqdm(r.iter_content(chunk_size=8192), backend=True, unit="kb"):
f.write(chunk)
except HTTPError as e:
if "huggingface.co" in file_url:
if "resolve"in file_url:
repo_url = file_url.split("resolve")[0]
st.session_state["progress_bar_text"].error(
f"You need to accept the license for the model in order to be able to download it. "
f"Please visit {repo_url} and accept the lincense there, then try again to download the model.")
logger.error(e)
else:
print(file_name + ' already exists.')
def download_model(models, model_name):
""" Download all files from model_list[model_name] """
for file in models[model_name]:
@ -50,18 +74,18 @@ def download_model(models, model_name):
def layout():
#search = st.text_input(label="Search", placeholder="Type the name of the model you want to search for.", help="")
colms = st.columns((1, 3, 5, 5))
columns = ["",'Model Name','Save Location','Download Link']
colms = st.columns((1, 3, 3, 5, 5))
columns = ["", 'Model Name', 'Save Location', "Download", 'Download Link']
models = st.session_state["defaults"].model_manager.models
for col, field_name in zip(colms, columns):
# table header
col.write(field_name)
for x, model_name in enumerate(models):
col1, col2, col3, col4 = st.columns((1, 3, 4, 6))
col1, col2, col3, col4, col5 = st.columns((1, 3, 3, 3, 6))
col1.write(x) # index
col2.write(models[model_name]['model_name'])
col3.write(models[model_name]['save_location'])
@ -69,16 +93,16 @@ def layout():
files_exist = 0
for file in models[model_name]['files']:
if "save_location" in models[model_name]['files'][file]:
os.path.exists(models[model_name]['files'][file]['save_location'] + '/' + models[model_name]['files'][file]['file_name'])
if os.path.exists(os.path.join(models[model_name]['files'][file]['save_location'], models[model_name]['files'][file]['file_name'])):
files_exist += 1
elif os.path.exists(models[model_name]['save_location'] + '/' + models[model_name]['files'][file]['file_name']):
elif os.path.exists(os.path.join(models[model_name]['save_location'], models[model_name]['files'][file]['file_name'])):
files_exist += 1
files_needed = []
for file in models[model_name]['files']:
if "save_location" in models[model_name]['files'][file]:
if not os.path.exists(models[model_name]['files'][file]['save_location'] + '/' + models[model_name]['files'][file]['file_name']):
if not os.path.exists(os.path.join(models[model_name]['files'][file]['save_location'], models[model_name]['files'][file]['file_name'])):
files_needed.append(file)
elif not os.path.exists(models[model_name]['save_location'] + '/' + models[model_name]['files'][file]['file_name']):
elif not os.path.exists(os.path.join(models[model_name]['save_location'], models[model_name]['files'][file]['file_name'])):
files_needed.append(file)
if len(files_needed) > 0:
if st.button('Download', key=models[model_name]['model_name'], help='Download ' + models[model_name]['model_name']):
@ -87,7 +111,10 @@ def layout():
download_file(models[model_name]['files'][file]['file_name'], models[model_name]['files'][file]['save_location'], models[model_name]['files'][file]['download_link'])
else:
download_file(models[model_name]['files'][file]['file_name'], models[model_name]['save_location'], models[model_name]['files'][file]['download_link'])
st.experimental_rerun()
else:
st.empty()
else:
st.write('')
st.write('')
#
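
Outside of Streamlit, the download logic above reduces to a streamed requests.get with optional HuggingFace token auth. A standalone sketch of the same flow (the function name, token, and URL handling are placeholders):

import os
import requests
from requests.auth import HTTPBasicAuth

def fetch(file_name, file_path, file_url, hf_token=None):
    os.makedirs(file_path, exist_ok=True)
    target = os.path.join(file_path, file_name)
    if os.path.exists(target):
        return  # same early exit as download_file() above
    # gated HuggingFace downloads authenticate with ('token', <hf_token>)
    auth = HTTPBasicAuth('token', hf_token) if "huggingface.co" in file_url and hf_token else None
    with requests.get(file_url, auth=auth, stream=True) as r:
        r.raise_for_status()
        with open(target, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)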

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1 @@
from logger import set_logger_verbosity, quiesce_logger

View File

@ -0,0 +1,97 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sandbox-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# base webui import and utils.
#from sd_utils import *
from sd_utils import *
# streamlit imports
#streamlit components section
#other imports
import os, time, requests
import sys
from barfi import st_barfi, barfi_schemas, Block
# Temp imports
# end of imports
#---------------------------------------------------------------------------------------------------------------
def layout():
#st.info("Under Construction. :construction_worker:")
#from barfi import st_barfi, Block
#add = Block(name='Addition')
#sub = Block(name='Subtraction')
#mul = Block(name='Multiplication')
#div = Block(name='Division')
#barfi_result = st_barfi(base_blocks= [add, sub, mul, div])
# or if you want to use a category to organise them in the frontend sub-menu
#barfi_result = st_barfi(base_blocks= {'Op 1': [add, sub], 'Op 2': [mul, div]})
col1, col2, col3 = st.columns([1, 8, 1])
from barfi import st_barfi, barfi_schemas, Block
with col2:
feed = Block(name='Feed')
feed.add_output()
def feed_func(self):
self.set_interface(name='Output 1', value=4)
feed.add_compute(feed_func)
splitter = Block(name='Splitter')
splitter.add_input()
splitter.add_output()
splitter.add_output()
def splitter_func(self):
in_1 = self.get_interface(name='Input 1')
value = (in_1/2)
self.set_interface(name='Output 1', value=value)
self.set_interface(name='Output 2', value=value)
splitter.add_compute(splitter_func)
mixer = Block(name='Mixer')
mixer.add_input()
mixer.add_input()
mixer.add_output()
def mixer_func(self):
in_1 = self.get_interface(name='Input 1')
in_2 = self.get_interface(name='Input 2')
value = (in_1 + in_2)
self.set_interface(name='Output 1', value=value)
mixer.add_compute(mixer_func)
result = Block(name='Result')
result.add_input()
def result_func(self):
in_1 = self.get_interface(name='Input 1')
result.add_compute(result_func)
load_schema = st.selectbox('Select a saved schema:', barfi_schemas())
compute_engine = st.checkbox('Activate barfi compute engine', value=False)
barfi_result = st_barfi(base_blocks=[feed, result, mixer, splitter],
compute_engine=compute_engine, load_schema=load_schema)
if barfi_result:
st.write(barfi_result)

View File

@ -0,0 +1,11 @@
import os
import streamlit.components.v1 as components
def load(pixel_per_step=50):
parent_dir = os.path.dirname(os.path.abspath(__file__))
file = os.path.join(parent_dir, "main.js")
with open(file) as f:
javascript_main = f.read()
javascript_main = javascript_main.replace("%%pixelPerStep%%",str(pixel_per_step))
components.html(f"<script>{javascript_main}</script>")

View File

@ -0,0 +1,192 @@
// iframe parent
var parentDoc = window.parent.document
// check for mouse pointer locking support, not a requirement but improves the overall experience
var havePointerLock = 'pointerLockElement' in parentDoc ||
'mozPointerLockElement' in parentDoc ||
'webkitPointerLockElement' in parentDoc;
// the pointer locking exit function
parentDoc.exitPointerLock = parentDoc.exitPointerLock || parentDoc.mozExitPointerLock || parentDoc.webkitExitPointerLock;
// how far should the mouse travel for a step in pixel
var pixelPerStep = %%pixelPerStep%%;
// how many steps did the mouse move in as float
var movementDelta = 0.0;
// value when drag started
var lockedValue = 0.0;
// minimum value from field
var lockedMin = 0.0;
// maximum value from field
var lockedMax = 0.0;
// how big should the field steps be
var lockedStep = 0.0;
// the currently locked in field
var lockedField = null;
// lock box to just request pointer lock for one element
var lockBox = document.createElement("div");
lockBox.classList.add("lockbox");
parentDoc.body.appendChild(lockBox);
lockBox.requestPointerLock = lockBox.requestPointerLock || lockBox.mozRequestPointerLock || lockBox.webkitRequestPointerLock;
function Lock(field)
{
var rect = field.getBoundingClientRect();
lockBox.style.left = (rect.left-2.5)+"px";
lockBox.style.top = (rect.top-2.5)+"px";
lockBox.style.width = (rect.width+2.5)+"px";
lockBox.style.height = (rect.height+5)+"px";
lockBox.requestPointerLock();
}
function Unlock()
{
parentDoc.exitPointerLock();
lockBox.style.left = "0px";
lockBox.style.top = "0px";
lockBox.style.width = "0px";
lockBox.style.height = "0px";
lockedField.focus();
}
parentDoc.addEventListener('mousedown', (e) => {
// if middle is down
if(e.button === 1)
{
if(e.target.tagName === 'INPUT' && e.target.type === 'number')
{
e.preventDefault();
var field = e.target;
if(havePointerLock)
Lock(field);
// save current field
lockedField = e.target;
// add class for styling
lockedField.classList.add("value-dragging");
// reset movement delta
movementDelta = 0.0;
// set to 0 if field is empty
if(lockedField.value === '')
lockedField.value = 0.0;
// save current field value
lockedValue = parseFloat(lockedField.value);
if(lockedField.min === '' || lockedField.min === '-Infinity')
lockedMin = -99999999.0;
else
lockedMin = parseFloat(lockedField.min);
if(lockedField.max === '' || lockedField.max === 'Infinity')
lockedMax = 99999999.0;
else
lockedMax = parseFloat(lockedField.max);
if(lockedField.step === '' || lockedField.step === 'Infinity')
lockedStep = 1.0;
else
lockedStep = parseFloat(lockedField.step);
// lock pointer if available
if(havePointerLock)
Lock(lockedField);
// add drag event
parentDoc.addEventListener("mousemove", onDrag, false);
}
}
});
function onDrag(e)
{
if(lockedField !== null)
{
// add movement to delta
movementDelta += e.movementX / pixelPerStep;
// abort if the starting value could not be parsed
if(Number.isNaN(lockedValue))
return;
// set new value
let value = lockedValue + Math.floor(Math.abs(movementDelta)) * lockedStep * Math.sign(movementDelta);
lockedField.focus();
lockedField.select();
parentDoc.execCommand('insertText', false /*no UI*/, Math.min(Math.max(value, lockedMin), lockedMax));
}
}
parentDoc.addEventListener('mouseup', (e) => {
// if mouse is up
if(e.button === 1)
{
// release pointer lock if available
if(havePointerLock)
Unlock();
if(lockedField !== null)
{
// stop drag event
parentDoc.removeEventListener("mousemove", onDrag, false);
// remove class for styling
lockedField.classList.remove("value-dragging");
// remove reference
lockedField = null;
}
}
});
// only execute once (even though multiple iframes exist)
if(!parentDoc.hasOwnProperty("dragableInitialized"))
{
var parentCSS =
`
/* Make input-instruction not block mouse events */
.input-instructions,.input-instructions > *{
pointer-events: none;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
}
.lockbox {
background-color: transparent;
position: absolute;
pointer-events: none;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
border-left: dotted 2px rgb(255,75,75);
border-top: dotted 2px rgb(255,75,75);
border-bottom: dotted 2px rgb(255,75,75);
border-right: dotted 1px rgba(255,75,75,0.2);
border-top-left-radius: 0.25rem;
border-bottom-left-radius: 0.25rem;
z-index: 1000;
}
`;
// get parent document head
var head = parentDoc.getElementsByTagName('head')[0];
// add style tag
var s = document.createElement('style');
// set type attribute
s.setAttribute('type', 'text/css');
// add css forwarded from python
if (s.styleSheet) { // IE
s.styleSheet.cssText = parentCSS;
} else { // the world
s.appendChild(document.createTextNode(parentCSS));
}
// add style to head
head.appendChild(s);
// set flag so this only runs once
parentDoc["dragableInitialized"] = true;
}

View File

@ -0,0 +1,46 @@
import os
from collections import defaultdict
import streamlit.components.v1 as components
# where to save the downloaded key_phrases
key_phrases_file = "data/tags/key_phrases.json"
# the loaded key phrase json as text
key_phrases_json = ""
# where to save the downloaded key_phrases
thumbnails_file = "data/tags/thumbnails.json"
# the loaded key phrase json as text
thumbnails_json = ""
def init():
global key_phrases_json, thumbnails_json
with open(key_phrases_file) as f:
key_phrases_json = f.read()
with open(thumbnails_file) as f:
thumbnails_json = f.read()
def suggestion_area(placeholder):
# get component path
parent_dir = os.path.dirname(os.path.abspath(__file__))
# get file paths
javascript_file = os.path.join(parent_dir, "main.js")
stylesheet_file = os.path.join(parent_dir, "main.css")
parent_stylesheet_file = os.path.join(parent_dir, "parent.css")
# load file texts
with open(javascript_file) as f:
javascript_main = f.read()
with open(stylesheet_file) as f:
stylesheet_main = f.read()
with open(parent_stylesheet_file) as f:
parent_stylesheet = f.read()
# add suggestion area div box
html = "<div id='scroll_area' class='st-bg'><div id='suggestion_area'>javascript failed</div></div>"
# add loaded style
html += f"<style>{stylesheet_main}</style>"
# set default variables
html += f"<script>var thumbnails = {thumbnails_json};\nvar keyPhrases = {key_phrases_json};\nvar parentCSS = `{parent_stylesheet}`;\nvar placeholder='{placeholder}';</script>"
# add main java script
html += f"\n<script>{javascript_main}</script>"
# add component to site
components.html(html, width=None, height=None, scrolling=True)
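
img2img.py further down shows the intended call pattern: init() loads the two JSON files once, then suggestion_area() renders the phrase picker under the prompt box:

import streamlit as st
from custom_components import sygil_suggestions

sygil_suggestions.init()  # read key_phrases.json and thumbnails.json
placeholder = "A corgi wearing a top hat as an oil painting."
prompt = st.text_area("Input Text", "", placeholder=placeholder, height=54)
sygil_suggestions.suggestion_area(placeholder)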

View File

@ -0,0 +1,81 @@
*
{
padding: 0px;
margin: 0px;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
}
body
{
width: 100%;
height: 100%;
padding-left: calc( 1em - 1px );
padding-top: calc( 1em - 1px );
overflow: hidden;
}
/* width */
::-webkit-scrollbar {
width: 7px;
}
/* Track */
::-webkit-scrollbar-track {
background: rgb(10, 13, 19);
}
/* Handle */
::-webkit-scrollbar-thumb {
background: #6c6e72;
border-radius: 3px;
}
/* Handle on hover */
::-webkit-scrollbar-thumb:hover {
background: #6c6e72;
}
#scroll_area
{
display: flex;
overflow-x: hidden;
overflow-y: auto;
}
#suggestion_area
{
overflow-x: hidden;
width: calc( 100% - 2em - 2px );
margin-bottom: calc( 1em + 13px );
min-height: 50px;
}
span
{
border: 1px solid rgba(250, 250, 250, 0.2);
border-radius: 0.25rem;
font-size: 1rem;
font-family: "Source Sans Pro", sans-serif;
background-color: rgb(38, 39, 48);
color: white;
display: inline-block;
padding: 0.5rem;
margin-right: 3px;
cursor: pointer;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
}
span:hover
{
color: rgb(255,75,75);
border-color: rgb(255,75,75);
}

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,84 @@
.suggestion-frame
{
position: absolute;
/* make as small as possible */
margin: 0px;
padding: 0px;
min-height: 0px;
line-height: 0;
/* animate transitions of the height property */
-webkit-transition: height 1s;
-moz-transition: height 1s;
-ms-transition: height 1s;
-o-transition: height 1s;
transition: height 1s, border-bottom-width 1s;
/* block selection */
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
z-index: 700;
outline: 1px solid rgba(250, 250, 250, 0.2);
outline-offset: 0px;
border-radius: 0.25rem;
background: rgb(14, 17, 23);
box-sizing: border-box;
-moz-box-sizing: border-box;
-webkit-box-sizing: border-box;
border-bottom: solid 13px rgb(14, 17, 23) !important;
border-left: solid 13px rgb(14, 17, 23) !important;
}
#phrase-tooltip
{
display: none;
pointer-events: none;
position: absolute;
border-bottom-left-radius: 0.5rem;
border-top-right-radius: 0.5rem;
border-bottom-right-radius: 0.5rem;
border: solid rgb(255,75,75) 2px;
background-color: rgb(38, 39, 48);
color: rgb(255,75,75);
font-size: 1rem;
font-family: "Source Sans Pro", sans-serif;
padding: 0.5rem;
cursor: default;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
z-index: 1000;
}
#phrase-tooltip:has(img)
{
transform: scale(1.25, 1.25);
-ms-transform: scale(1.25, 1.25);
-webkit-transform: scale(1.25, 1.25);
}
#phrase-tooltip>img
{
pointer-events: none;
border-bottom-left-radius: 0.5rem;
border-top-right-radius: 0.5rem;
border-bottom-right-radius: 0.5rem;
cursor: default;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
z-index: 1500;
}

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

View File

@ -0,0 +1,766 @@
# Copyright (C) 2021 cryzed
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import enum
import json
import os
import typing as T
from collections import abc
import requests
__version__ = "4.0.0"
DEFAULT_API_URL = "http://127.0.0.1:45869/"
HYDRUS_METADATA_ENCODING = "utf-8"
AUTHENTICATION_TIMEOUT_CODE = 419
class HydrusAPIException(Exception):
pass
class ConnectionError(HydrusAPIException, requests.ConnectTimeout):
pass
class APIError(HydrusAPIException):
def __init__(self, response: requests.Response):
super().__init__(response.text)
self.response = response
class MissingParameter(APIError):
pass
class InsufficientAccess(APIError):
pass
class DatabaseLocked(APIError):
pass
class ServerError(APIError):
pass
# Customize IntEnum, so we can just do str(Enum.member) to get the string representation of its value unmodified,
# without users having to access .value explicitly
class StringableIntEnum(enum.IntEnum):
def __str__(self):
return str(self.value)
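# e.g. str(Permission.SEARCH_FILES) == "3", so members can be dropped straight
# into query parameters without touching .value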
@enum.unique
class Permission(StringableIntEnum):
IMPORT_URLS = 0
IMPORT_FILES = 1
ADD_TAGS = 2
SEARCH_FILES = 3
MANAGE_PAGES = 4
MANAGE_COOKIES = 5
MANAGE_DATABASE = 6
ADD_NOTES = 7
@enum.unique
class URLType(StringableIntEnum):
POST_URL = 0
FILE_URL = 2
GALLERY_URL = 3
WATCHABLE_URL = 4
UNKNOWN_URL = 5
@enum.unique
class ImportStatus(StringableIntEnum):
IMPORTABLE = 0
SUCCESS = 1
EXISTS = 2
PREVIOUSLY_DELETED = 3
FAILED = 4
VETOED = 7
@enum.unique
class TagAction(StringableIntEnum):
ADD = 0
DELETE = 1
PEND = 2
RESCIND_PENDING = 3
PETITION = 4
RESCIND_PETITION = 5
@enum.unique
class TagStatus(StringableIntEnum):
CURRENT = 0
PENDING = 1
DELETED = 2
PETITIONED = 3
@enum.unique
class PageType(StringableIntEnum):
GALLERY_DOWNLOADER = 1
SIMPLE_DOWNLOADER = 2
HARD_DRIVE_IMPORT = 3
PETITIONS = 5
FILE_SEARCH = 6
URL_DOWNLOADER = 7
DUPLICATES = 8
THREAD_WATCHER = 9
PAGE_OF_PAGES = 10
@enum.unique
class FileSortType(StringableIntEnum):
FILE_SIZE = 0
DURATION = 1
IMPORT_TIME = 2
FILE_TYPE = 3
RANDOM = 4
WIDTH = 5
HEIGHT = 6
RATIO = 7
NUMBER_OF_PIXELS = 8
NUMBER_OF_TAGS = 9
NUMBER_OF_MEDIA_VIEWS = 10
TOTAL_MEDIA_VIEWTIME = 11
APPROXIMATE_BITRATE = 12
HAS_AUDIO = 13
MODIFIED_TIME = 14
FRAMERATE = 15
NUMBER_OF_FRAMES = 16
class BinaryFileLike(T.Protocol):
def read(self):
...
# The client should accept all objects that either support the iterable or mapping protocol. We must ensure that objects
# are either lists or dicts, so Python's json module can handle them
class JSONEncoder(json.JSONEncoder):
def default(self, object_: T.Any):
if isinstance(object_, abc.Mapping):
return dict(object_)
if isinstance(object_, abc.Iterable):
return list(object_)
return super().default(object_)
class Client:
VERSION = 31
# Access Management
_GET_API_VERSION_PATH = "/api_version"
_REQUEST_NEW_PERMISSIONS_PATH = "/request_new_permissions"
_GET_SESSION_KEY_PATH = "/session_key"
_VERIFY_ACCESS_KEY_PATH = "/verify_access_key"
_GET_SERVICES_PATH = "/get_services"
# Adding Files
_ADD_FILE_PATH = "/add_files/add_file"
_DELETE_FILES_PATH = "/add_files/delete_files"
_UNDELETE_FILES_PATH = "/add_files/undelete_files"
_ARCHIVE_FILES_PATH = "/add_files/archive_files"
_UNARCHIVE_FILES_PATH = "/add_files/unarchive_files"
# Adding Tags
_CLEAN_TAGS_PATH = "/add_tags/clean_tags"
_SEARCH_TAGS_PATH = "/add_tags/search_tags"
_ADD_TAGS_PATH = "/add_tags/add_tags"
# Adding URLs
_GET_URL_FILES_PATH = "/add_urls/get_url_files"
_GET_URL_INFO_PATH = "/add_urls/get_url_info"
_ADD_URL_PATH = "/add_urls/add_url"
_ASSOCIATE_URL_PATH = "/add_urls/associate_url"
# Adding Notes
_SET_NOTES_PATH = "/add_notes/set_notes"
_DELETE_NOTES_PATH = "/add_notes/delete_notes"
# Managing Cookies and HTTP Headers
_GET_COOKIES_PATH = "/manage_cookies/get_cookies"
_SET_COOKIES_PATH = "/manage_cookies/set_cookies"
_SET_USER_AGENT_PATH = "/manage_headers/set_user_agent"
# Managing Pages
_GET_PAGES_PATH = "/manage_pages/get_pages"
_GET_PAGE_INFO_PATH = "/manage_pages/get_page_info"
_ADD_FILES_TO_PAGE_PATH = "/manage_pages/add_files"
_FOCUS_PAGE_PATH = "/manage_pages/focus_page"
# Searching and Fetching Files
_SEARCH_FILES_PATH = "/get_files/search_files"
_GET_FILE_METADATA_PATH = "/get_files/file_metadata"
_GET_FILE_PATH = "/get_files/file"
_GET_THUMBNAIL_PATH = "/get_files/thumbnail"
# Managing the Database
_LOCK_DATABASE_PATH = "/manage_database/lock_on"
_UNLOCK_DATABASE_PATH = "/manage_database/lock_off"
_MR_BONES_PATH = "/manage_database/mr_bones"
def __init__(
self,
access_key = None,
api_url: str = DEFAULT_API_URL,
session = None,
):
"""
See https://hydrusnetwork.github.io/hydrus/help/client_api.html for documentation.
"""
self.access_key = access_key
self.api_url = api_url.rstrip("/")
self.session = session or requests.Session()
def _api_request(self, method: str, path: str, **kwargs: T.Any):
if self.access_key is not None:
kwargs.setdefault("headers", {}).update({"Hydrus-Client-API-Access-Key": self.access_key})
# Make sure we use our custom JSONEncoder that can serialize all objects that implement the iterable or mapping
# protocol
json_data = kwargs.pop("json", None)
if json_data is not None:
kwargs["data"] = json.dumps(json_data, cls=JSONEncoder)
# Since we aren't using the json keyword-argument, we have to set the Content-Type manually
kwargs["headers"]["Content-Type"] = "application/json"
try:
response = self.session.request(method, self.api_url + path, **kwargs)
except requests.RequestException as error:
# Re-raise connection and timeout errors as hydrus.ConnectionErrors so these are more easy to handle for
# client applications
raise ConnectionError(*error.args)
try:
response.raise_for_status()
except requests.HTTPError:
if response.status_code == requests.codes.bad_request:
raise MissingParameter(response)
elif response.status_code in {
requests.codes.unauthorized,
requests.codes.forbidden,
AUTHENTICATION_TIMEOUT_CODE,
}:
raise InsufficientAccess(response)
elif response.status_code == requests.codes.service_unavailable:
raise DatabaseLocked(response)
elif response.status_code == requests.codes.server_error:
raise ServerError(response)
raise APIError(response)
return response
def get_api_version(self):
response = self._api_request("GET", self._GET_API_VERSION_PATH)
return response.json()
def request_new_permissions(self, name, permissions):
response = self._api_request(
"GET",
self._REQUEST_NEW_PERMISSIONS_PATH,
params={"name": name, "basic_permissions": json.dumps(permissions, cls=JSONEncoder)},
)
return response.json()["access_key"]
def get_session_key(self):
response = self._api_request("GET", self._GET_SESSION_KEY_PATH)
return response.json()["session_key"]
def verify_access_key(self):
response = self._api_request("GET", self._VERIFY_ACCESS_KEY_PATH)
return response.json()
def get_services(self):
response = self._api_request("GET", self._GET_SERVICES_PATH)
return response.json()
def add_file(self, path_or_file: T.Union[str, os.PathLike, BinaryFileLike]):
if isinstance(path_or_file, (str, os.PathLike)):
response = self._api_request("POST", self._ADD_FILE_PATH, json={"path": os.fspath(path_or_file)})
else:
response = self._api_request(
"POST",
self._ADD_FILE_PATH,
data=path_or_file.read(),
headers={"Content-Type": "application/octet-stream"},
)
return response.json()
def delete_files(
self,
hashes = None,
file_ids = None,
file_service_name = None,
file_service_key = None,
reason = None
):
if hashes is None and file_ids is None:
raise ValueError("At least one of hashes, file_ids is required")
if file_service_name is not None and file_service_key is not None:
raise ValueError("Exactly one of file_service_name, file_service_key is required")
payload: dict[str, T.Any] = {}
if hashes is not None:
payload["hashes"] = hashes
if file_ids is not None:
payload["file_ids"] = file_ids
if file_service_name is not None:
payload["file_service_name"] = file_service_name
if file_service_key is not None:
payload["file_service_key"] = file_service_key
if reason is not None:
payload["reason"] = reason
self._api_request("POST", self._DELETE_FILES_PATH, json=payload)
def undelete_files(
self,
hashes = None,
file_ids = None,
file_service_name = None,
file_service_key = None,
):
if hashes is None and file_ids is None:
raise ValueError("At least one of hashes, file_ids is required")
if file_service_name is not None and file_service_key is not None:
raise ValueError("Exactly one of file_service_name, file_service_key is required")
payload: dict[str, T.Any] = {}
if hashes is not None:
payload["hashes"] = hashes
if file_ids is not None:
payload["file_ids"] = file_ids
if file_service_name is not None:
payload["file_service_name"] = file_service_name
if file_service_key is not None:
payload["file_service_key"] = file_service_key
self._api_request("POST", self._UNDELETE_FILES_PATH, json=payload)
def archive_files(
self,
hashes = None,
file_ids = None
):
if hashes is None and file_ids is None:
raise ValueError("At least one of hashes, file_ids is required")
payload: dict[str, T.Any] = {}
if hashes is not None:
payload["hashes"] = hashes
if file_ids is not None:
payload["file_ids"] = file_ids
self._api_request("POST", self._ARCHIVE_FILES_PATH, json=payload)
def unarchive_files(
self,
hashes = None,
file_ids = None
):
if hashes is None and file_ids is None:
raise ValueError("At least one of hashes, file_ids is required")
payload: dict[str, T.Any] = {}
if hashes is not None:
payload["hashes"] = hashes
if file_ids is not None:
payload["file_ids"] = file_ids
self._api_request("POST", self._UNARCHIVE_FILES_PATH, json=payload)
def clean_tags(self, tags):
response = self._api_request("GET", self._CLEAN_TAGS_PATH, params={"tags": json.dumps(tags, cls=JSONEncoder)})
return response.json()["tags"]
def search_tags(
self,
search: str,
tag_service_key = None,
tag_service_name = None
):
if tag_service_name is not None and tag_service_key is not None:
raise ValueError("Exactly one of tag_service_name, tag_service_key is required")
payload: dict[str, T.Any] = {"search": search}
if tag_service_key is not None:
payload["tag_service_key"] = tag_service_key
if tag_service_name is not None:
payload["tag_service_name"] = tag_service_name
response = self._api_request("GET", self._SEARCH_TAGS_PATH, params=payload)
return response.json()["tags"]
def add_tags(
self,
hashes = None,
file_ids = None,
service_names_to_tags = None,
service_keys_to_tags = None,
service_names_to_actions_to_tags = None,
service_keys_to_actions_to_tags = None,
):
if hashes is None and file_ids is None:
raise ValueError("At least one of hashes, file_ids is required")
if (
service_names_to_tags is None
and service_keys_to_tags is None
and service_names_to_actions_to_tags is None
and service_keys_to_actions_to_tags is None
):
raise ValueError(
"At least one of service_names_to_tags, service_keys_to_tags, service_names_to_actions_to_tags or "
"service_keys_to_actions_to_tags is required"
)
payload: dict[str, T.Any] = {}
if hashes is not None:
payload["hashes"] = hashes
if file_ids is not None:
payload["file_ids"] = file_ids
if service_names_to_tags is not None:
payload["service_names_to_tags"] = service_names_to_tags
if service_keys_to_tags is not None:
payload["service_keys_to_tags"] = service_keys_to_tags
if service_names_to_actions_to_tags is not None:
payload["service_names_to_actions_to_tags"] = service_names_to_actions_to_tags
if service_keys_to_actions_to_tags is not None:
payload["service_keys_to_actions_to_tags"] = service_keys_to_actions_to_tags
self._api_request("POST", self._ADD_TAGS_PATH, json=payload)
def get_url_files(self, url: str):
response = self._api_request("GET", self._GET_URL_FILES_PATH, params={"url": url})
return response.json()
def get_url_info(self, url: str):
response = self._api_request("GET", self._GET_URL_INFO_PATH, params={"url": url})
return response.json()
def add_url(
self,
url: str,
destination_page_key = None,
destination_page_name = None,
show_destination_page = None,
service_names_to_additional_tags = None,
service_keys_to_additional_tags = None,
filterable_tags = None,
):
if destination_page_key is not None and destination_page_name is not None:
raise ValueError("Exactly one of destination_page_key, destination_page_name is required")
payload: dict[str, T.Any] = {"url": url}
if destination_page_key is not None:
payload["destination_page_key"] = destination_page_key
if destination_page_name is not None:
payload["destination_page_name"] = destination_page_name
if show_destination_page is not None:
payload["show_destination_page"] = show_destination_page
if service_names_to_additional_tags is not None:
payload["service_names_to_additional_tags"] = service_names_to_additional_tags
if service_keys_to_additional_tags is not None:
payload["service_keys_to_additional_tags"] = service_keys_to_additional_tags
if filterable_tags is not None:
payload["filterable_tags"] = filterable_tags
response = self._api_request("POST", self._ADD_URL_PATH, json=payload)
return response.json()
def associate_url(
self,
hashes = None,
file_ids = None,
urls_to_add = None,
urls_to_delete = None,
):
if hashes is None and file_ids is None:
raise ValueError("At least one of hashes, file_ids is required")
if urls_to_add is None and urls_to_delete is None:
raise ValueError("At least one of urls_to_add, urls_to_delete is required")
payload: dict[str, T.Any] = {}
if hashes is not None:
payload["hashes"] = hashes
if file_ids is not None:
payload["file_ids"] = file_ids
if urls_to_add is not None:
payload["urls_to_add"] = urls_to_add
if urls_to_delete is not None:
payload["urls_to_delete"] = urls_to_delete
self._api_request("POST", self._ASSOCIATE_URL_PATH, json=payload)
def set_notes(self, notes, hash_=None, file_id=None):
if (hash_ is None and file_id is None) or (hash_ is not None and file_id is not None):
raise ValueError("Exactly one of hash_, file_id is required")
payload: dict[str, T.Any] = {"notes": notes}
if hash_ is not None:
payload["hash"] = hash_
if file_id is not None:
payload["file_id"] = file_id
self._api_request("POST", self._SET_NOTES_PATH, json=payload)
def delete_notes(
self,
note_names,
hash_ = None,
file_id = None
):
if (hash_ is None and file_id is None) or (hash_ is not None and file_id is not None):
raise ValueError("Exactly one of hash_, file_id is required")
payload: dict[str, T.Any] = {"note_names": note_names}
if hash_ is not None:
payload["hash"] = hash_
if file_id is not None:
payload["file_id"] = file_id
self._api_request("POST", self._DELETE_NOTES_PATH, json=payload)
def get_cookies(self, domain: str):
response = self._api_request("GET", self._GET_COOKIES_PATH, params={"domain": domain})
return response.json()["cookies"]
def set_cookies(self, cookies):
self._api_request("POST", self._SET_COOKIES_PATH, json={"cookies": cookies})
def set_user_agent(self, user_agent: str):
self._api_request("POST", self._SET_USER_AGENT_PATH, json={"user-agent": user_agent})
def get_pages(self):
response = self._api_request("GET", self._GET_PAGES_PATH)
return response.json()["pages"]
def get_page_info(self, page_key: str, simple = None):
parameters = {"page_key": page_key}
if simple is not None:
parameters["simple"] = json.dumps(simple, cls=JSONEncoder)
response = self._api_request("GET", self._GET_PAGE_INFO_PATH, params=parameters)
return response.json()["page_info"]
def add_files_to_page(
self,
page_key: str,
file_ids = None,
hashes = None
):
if file_ids is None and hashes is None:
raise ValueError("At least one of file_ids, hashes is required")
payload: dict[str, T.Any] = {"page_key": page_key}
if file_ids is not None:
payload["file_ids"] = file_ids
if hashes is not None:
payload["hashes"] = hashes
self._api_request("POST", self._ADD_FILES_TO_PAGE_PATH, json=payload)
def focus_page(self, page_key: str):
self._api_request("POST", self._FOCUS_PAGE_PATH, json={"page_key": page_key})
def search_files(
self,
tags,
file_service_name = None,
file_service_key = None,
tag_service_name = None,
tag_service_key = None,
file_sort_type = None,
file_sort_asc = None,
return_hashes = None,
):
if file_service_name is not None and file_service_key is not None:
raise ValueError("Exactly one of file_service_name, file_service_key is required")
if tag_service_name is not None and tag_service_key is not None:
raise ValueError("Exactly one of tag_service_name, tag_service_key is required")
parameters: dict[str, T.Union[str, int]] = {"tags": json.dumps(tags, cls=JSONEncoder)}
if file_service_name is not None:
parameters["file_service_name"] = file_service_name
if file_service_key is not None:
parameters["file_service_key"] = file_service_key
if tag_service_name is not None:
parameters["tag_service_name"] = tag_service_name
if tag_service_key is not None:
parameters["tag_service_key"] = tag_service_key
if file_sort_type is not None:
parameters["file_sort_type"] = file_sort_type
if file_sort_asc is not None:
parameters["file_sort_asc"] = json.dumps(file_sort_asc, cls=JSONEncoder)
if return_hashes is not None:
parameters["return_hashes"] = json.dumps(return_hashes, cls=JSONEncoder)
response = self._api_request("GET", self._SEARCH_FILES_PATH, params=parameters)
return response.json()["hashes" if return_hashes else "file_ids"]
def get_file_metadata(
self,
hashes = None,
file_ids = None,
create_new_file_ids = None,
only_return_identifiers = None,
only_return_basic_information = None,
detailed_url_information = None,
hide_service_name_tags = None,
include_notes = None
):
if hashes is None and file_ids is None:
raise ValueError("At least one of hashes, file_ids is required")
parameters = {}
if hashes is not None:
parameters["hashes"] = json.dumps(hashes, cls=JSONEncoder)
if file_ids is not None:
parameters["file_ids"] = json.dumps(file_ids, cls=JSONEncoder)
if create_new_file_ids is not None:
parameters["create_new_file_ids"] = json.dumps(create_new_file_ids, cls=JSONEncoder)
if only_return_identifiers is not None:
parameters["only_return_identifiers"] = json.dumps(only_return_identifiers, cls=JSONEncoder)
if only_return_basic_information is not None:
parameters["only_return_basic_information"] = json.dumps(only_return_basic_information, cls=JSONEncoder)
if detailed_url_information is not None:
parameters["detailed_url_information"] = json.dumps(detailed_url_information, cls=JSONEncoder)
if hide_service_name_tags is not None:
parameters["hide_service_name_tags"] = json.dumps(hide_service_name_tags, cls=JSONEncoder)
if include_notes is not None:
parameters["include_notes"] = json.dumps(include_notes, cls=JSONEncoder)
response = self._api_request("GET", self._GET_FILE_METADATA_PATH, params=parameters)
return response.json()["metadata"]
def get_file(self, hash_ = None, file_id = None):
if (hash_ is None and file_id is None) or (hash_ is not None and file_id is not None):
raise ValueError("Exactly one of hash_, file_id is required")
parameters: dict[str, T.Union[str, int]] = {}
if hash_ is not None:
parameters["hash"] = hash_
if file_id is not None:
parameters["file_id"] = file_id
return self._api_request("GET", self._GET_FILE_PATH, params=parameters, stream=True)
def get_thumbnail(self, hash_ = None, file_id = None):
if (hash_ is None and file_id is None) or (hash_ is not None and file_id is not None):
raise ValueError("Exactly one of hash_, file_id is required")
parameters: dict[str, T.Union[str, int]] = {}
if hash_ is not None:
parameters["hash"] = hash_
if file_id is not None:
parameters["file_id"] = file_id
return self._api_request("GET", self._GET_THUMBNAIL_PATH, params=parameters, stream=True)
def lock_database(self):
self._api_request("POST", self._LOCK_DATABASE_PATH)
def unlock_database(self):
self._api_request("POST", self._UNLOCK_DATABASE_PATH)
def get_mr_bones(self):
return self._api_request("GET", self._MR_BONES_PATH).json()["boned_stats"]
def add_and_tag_files(
self,
paths_or_files,
tags,
service_names = None,
service_keys = None,
):
"""Convenience method to add and tag multiple files at the same time.
If service_names and service_keys aren't specified, the default service name "my tags" will be used. If a file
already exists in Hydrus, it will also be tagged.
Returns:
list[dict[str, T.Any]]: Returns results of all `Client.add_file()` calls, matching the order of the
paths_or_files iterable
"""
if service_names is None and service_keys is None:
service_names = ("my tags",)
results = []
hashes = set()
for path_or_file in paths_or_files:
result = self.add_file(path_or_file)
results.append(result)
if result["status"] != ImportStatus.FAILED:
hashes.add(result["hash"])
service_names_to_tags = {name: tags for name in service_names} if service_names is not None else None
service_keys_to_tags = {key: tags for key in service_keys} if service_keys is not None else None
# Ignore type, we know that hashes only contains strings
self.add_tags(hashes, service_names_to_tags=service_names_to_tags, service_keys_to_tags=service_keys_to_tags) # type: ignore
return results
def get_page_list(self):
"""Convenience method that returns a flattened version of the page tree from `Client.get_pages()`.
Returns:
list[dict[str, T.Any]]: A list of every "pages" value in the page tree in pre-order (NLR)
"""
tree = self.get_pages()
pages = []
def walk_tree(page: dict[str, T.Any]):
pages.append(page)
# Ignore type, we know that pages is always a list
for sub_page in page.get("pages", ()): # type: ignore
# Ignore type, we know that sub_page is always a dict
walk_tree(sub_page) # type: ignore
walk_tree(tree)
return pages
__all__ = [
"__version__",
"DEFAULT_API_URL",
"HYDRUS_METADATA_ENCODING",
"HydrusAPIException",
"ConnectionError",
"APIError",
"MissingParameter",
"InsufficientAccess",
"DatabaseLocked",
"ServerError",
"Permission",
"URLType",
"ImportStatus",
"TagAction",
"TagStatus",
"PageType",
"FileSortType",
"Client",
]
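
A short usage sketch of the client above (the access key is a placeholder; a running Hydrus instance with the Client API enabled is assumed):

import hydrus_api

client = hydrus_api.Client("0123456789abcdef")  # placeholder access key
print(client.get_api_version())

# import a file and tag it in one call
results = client.add_and_tag_files(["/tmp/example.png"], tags=["creator:me"])

# find everything tagged 'creator:me' and fetch its metadata
file_ids = client.search_files(["creator:me"])
metadata = client.get_file_metadata(file_ids=file_ids)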

102
scripts/hydrus_api/utils.py Normal file
View File

@ -0,0 +1,102 @@
# Copyright (C) 2021 cryzed
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import collections
import os
import typing as T
from collections import abc
from hydrus_api import DEFAULT_API_URL, HYDRUS_METADATA_ENCODING, Client, Permission
X = T.TypeVar("X")
class TextFileLike(T.Protocol):
def read(self) -> str:
pass
def verify_permissions(
client: Client, permissions: abc.Iterable[T.Union[int, Permission]], exact: bool = False
) -> bool:
granted_permissions = set(client.verify_access_key()["basic_permissions"])
return granted_permissions == set(permissions) if exact else granted_permissions.issuperset(permissions)
def cli_request_api_key(
name: str,
permissions: abc.Iterable[T.Union[int, Permission]],
verify: bool = True,
exact: bool = False,
api_url: str = DEFAULT_API_URL,
) -> str:
while True:
input(
'Navigate to "services->review services->local->client api" in the Hydrus client and click "add->from api '
'request". Then press enter to continue...'
)
access_key = Client(api_url=api_url).request_new_permissions(name, permissions)
input("Press OK and then apply in the Hydrus client dialog. Then press enter to continue...")
client = Client(access_key, api_url)
if verify and not verify_permissions(client, permissions, exact):
granted = client.verify_access_key()["basic_permissions"]
print(
f"The granted permissions ({granted}) differ from the requested permissions ({permissions}), please "
"grant all requested permissions."
)
continue
return access_key
def parse_hydrus_metadata(text: str) -> collections.defaultdict[T.Optional[str], set[str]]:
namespaces = collections.defaultdict(set)
for line in (line.strip() for line in text.splitlines()):
if not line:
continue
parts = line.split(":", 1)
namespace, tag = (None, line) if len(parts) == 1 else parts
namespaces[namespace].add(tag)
# Ignore type, mypy has trouble figuring out that tag isn't optional
return namespaces # type: ignore
def parse_hydrus_metadata_file(
path_or_file: T.Union[str, os.PathLike, TextFileLike]
) -> collections.defaultdict[T.Optional[str], set[str]]:
if isinstance(path_or_file, (str, os.PathLike)):
with open(path_or_file, encoding=HYDRUS_METADATA_ENCODING) as file:
return parse_hydrus_metadata(file.read())
return parse_hydrus_metadata(path_or_file.read())
# Useful for splitting up requests to get_file_metadata()
def yield_chunks(sequence: T.Sequence[X], chunk_size: int, offset: int = 0) -> T.Generator[T.Sequence[X], None, None]:
while offset < len(sequence):
yield sequence[offset : offset + chunk_size]
offset += chunk_size
__all__ = [
"verify_permissions",
"cli_request_api_key",
"parse_hydrus_metadata",
"parse_hydrus_metadata_file",
"yield_chunks",
]
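
Putting the helpers together (a sketch; the script name and tags are placeholders, and scripts/hydrus_api is assumed to be importable as the hydrus_api package):

import hydrus_api
import hydrus_api.utils

REQUIRED = (hydrus_api.Permission.SEARCH_FILES, hydrus_api.Permission.ADD_TAGS)

# interactively request an access key limited to these permissions
access_key = hydrus_api.utils.cli_request_api_key("My script", REQUIRED)
client = hydrus_api.Client(access_key)
assert hydrus_api.utils.verify_permissions(client, REQUIRED)

# fetch metadata in manageable batches instead of one huge request
file_ids = client.search_files(["creator:me"])
for chunk in hydrus_api.utils.yield_chunks(file_ids, chunk_size=256):
    for meta in client.get_file_metadata(file_ids=chunk):
        print(meta.get("hash"))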

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -12,7 +12,7 @@
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# base webui import and utils.
from sd_utils import *
@ -30,12 +30,18 @@ import torch
import skimage
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.plms import PLMSSampler
# Temp imports
# streamlit components
from custom_components import sygil_suggestions
from streamlit_drawable_canvas import st_canvas
# Temp imports
# end of imports
#---------------------------------------------------------------------------------------------------------------
sygil_suggestions.init()
try:
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
@ -45,11 +51,11 @@ try:
except:
pass
def img2img(prompt: str = '', init_info: any = None, init_info_mask: any = None, mask_mode: int = 0, mask_blur_strength: int = 3,
def img2img(prompt: str = '', init_info: any = None, init_info_mask: any = None, mask_mode: int = 0, mask_blur_strength: int = 3,
mask_restore: bool = False, ddim_steps: int = 50, sampler_name: str = 'DDIM',
n_iter: int = 1, cfg_scale: float = 7.5, denoising_strength: float = 0.8,
seed: int = -1, noise_mode: int = 0, find_noise_steps: str = "", height: int = 512, width: int = 512, resize_mode: int = 0, fp = None,
variant_amount: float = None, variant_seed: int = None, ddim_eta:float = 0.0,
variant_amount: float = 0.0, variant_seed: int = None, ddim_eta:float = 0.0,
write_info_files:bool = True, separate_prompts:bool = False, normalize_prompt_weights:bool = True,
save_individual_images: bool = True, save_grid: bool = True, group_by_prompt: bool = True,
save_as_jpg: bool = True, use_GFPGAN: bool = True, GFPGAN_model: str = 'GFPGANv1.4',
@ -202,7 +208,7 @@ def img2img(prompt: str = '', init_info: any = None, init_info_mask: any = None,
samples_ddim = K.sampling.__dict__[f'sample_{sampler.get_sampler_name()}'](model_wrap_cfg, xi, sigma_sched,
extra_args={'cond': conditioning, 'uncond': unconditional_conditioning,
'cond_scale': cfg_scale, 'mask': z_mask, 'x0': x0, 'xi': xi}, disable=False,
callback=generation_callback)
callback=generation_callback if not server_state["bridge"] else None)
else:
x0, z_mask = init_data
@ -234,7 +240,7 @@ def img2img(prompt: str = '', init_info: any = None, init_info_mask: any = None,
from skimage import exposure
do_color_correction = True
except:
print("Install scikit-image to perform color correction on loopback")
logger.error("Install scikit-image to perform color correction on loopback")
for i in range(n_iter):
if do_color_correction and i == 0:
@ -356,28 +362,30 @@ def img2img(prompt: str = '', init_info: any = None, init_info_mask: any = None,
del sampler
return output_images, seed, info, stats
#
def layout():
with st.form("img2img-inputs"):
st.session_state["generation_mode"] = "img2img"
img2img_input_col, img2img_generate_col = st.columns([10,1])
with img2img_input_col:
#prompt = st.text_area("Input Text","")
prompt = st.text_input("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.")
placeholder = "A corgi wearing a top hat as an oil painting."
prompt = st.text_area("Input Text","", placeholder=placeholder, height=54)
sygil_suggestions.suggestion_area(placeholder)
# Every form must have a submit button; the extra blank spaces are a temp way to align it with the input field. Needs to be done in CSS or some other way.
img2img_generate_col.write("")
img2img_generate_col.write("")
generate_button = img2img_generate_col.form_submit_button("Generate")
# creating the page layout using columns
col1_img2img_layout, col2_img2img_layout, col3_img2img_layout = st.columns([1,2,2], gap="small")
col1_img2img_layout, col2_img2img_layout, col3_img2img_layout = st.columns([2,4,4], gap="medium")
with col1_img2img_layout:
# If we have custom models available on the "models/custom"
# If we have custom models available on the "models/custom"
# folder then we show a menu to select which model we want to use; otherwise we use the main model for SD
custom_models_available()
if server_state["CustomModel_available"]:
@ -386,71 +394,90 @@ def layout():
help="Select the model you want to use. This option is only available if you have custom models \
in your 'models/custom' folder. The model name shown here matches the model's file name \
in that folder, so it is recommended to give the .ckpt file a name that \
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.4")
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.5")
else:
st.session_state["custom_model"] = "Stable Diffusion v1.4"
st.session_state["sampling_steps"] = st.slider("Sampling Steps", value=st.session_state['defaults'].img2img.sampling_steps.value,
min_value=st.session_state['defaults'].img2img.sampling_steps.min_value,
max_value=st.session_state['defaults'].img2img.sampling_steps.max_value,
step=st.session_state['defaults'].img2img.sampling_steps.step)
st.session_state["custom_model"] = "Stable Diffusion v1.5"
st.session_state["sampling_steps"] = st.number_input("Sampling Steps", value=st.session_state['defaults'].img2img.sampling_steps.value,
min_value=st.session_state['defaults'].img2img.sampling_steps.min_value,
step=st.session_state['defaults'].img2img.sampling_steps.step)
sampler_name_list = ["k_lms", "k_euler", "k_euler_a", "k_dpm_2", "k_dpm_2_a", "k_heun", "PLMS", "DDIM"]
st.session_state["sampler_name"] = st.selectbox("Sampling method",sampler_name_list,
index=sampler_name_list.index(st.session_state['defaults'].img2img.sampler_name), help="Sampling method to use.")
st.session_state["sampler_name"] = st.selectbox("Sampling method",sampler_name_list,
index=sampler_name_list.index(st.session_state['defaults'].img2img.sampler_name), help="Sampling method to use.")
width = st.slider("Width:", min_value=st.session_state['defaults'].img2img.width.min_value, max_value=st.session_state['defaults'].img2img.width.max_value,
value=st.session_state['defaults'].img2img.width.value, step=st.session_state['defaults'].img2img.width.step)
height = st.slider("Height:", min_value=st.session_state['defaults'].img2img.height.min_value, max_value=st.session_state['defaults'].img2img.height.max_value,
value=st.session_state['defaults'].img2img.height.value, step=st.session_state['defaults'].img2img.height.step)
seed = st.text_input("Seed:", value=st.session_state['defaults'].img2img.seed, help="The seed to use; if left blank a random seed will be generated.")
cfg_scale = st.slider("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].img2img.cfg_scale.min_value,
max_value=st.session_state['defaults'].img2img.cfg_scale.max_value, value=st.session_state['defaults'].img2img.cfg_scale.value,
step=st.session_state['defaults'].img2img.cfg_scale.step, help="How strongly the image should follow the prompt.")
st.session_state["denoising_strength"] = st.slider("Denoising Strength:", value=st.session_state['defaults'].img2img.denoising_strength.value,
min_value=st.session_state['defaults'].img2img.denoising_strength.min_value,
max_value=st.session_state['defaults'].img2img.denoising_strength.max_value,
step=st.session_state['defaults'].img2img.denoising_strength.step)
seed = st.text_input("Seed:", value=st.session_state['defaults'].img2img.seed, help="The seed to use; if left blank a random seed will be generated.")
cfg_scale = st.number_input("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].img2img.cfg_scale.min_value,
value=st.session_state['defaults'].img2img.cfg_scale.value,
step=st.session_state['defaults'].img2img.cfg_scale.step,
help="How strongly the image should follow the prompt.")
st.session_state["denoising_strength"] = st.slider("Denoising Strength:", value=st.session_state['defaults'].img2img.denoising_strength.value,
min_value=st.session_state['defaults'].img2img.denoising_strength.min_value,
max_value=st.session_state['defaults'].img2img.denoising_strength.max_value,
step=st.session_state['defaults'].img2img.denoising_strength.step)
mask_expander = st.empty()
with mask_expander.expander("Mask"):
mask_mode_list = ["Mask", "Inverted mask", "Image alpha"]
mask_mode = st.selectbox("Mask Mode", mask_mode_list,
help="Select how you want your image to be masked.\"Mask\" modifies the image where the mask is white.\n\
\"Inverted mask\" modifies the image where the mask is black. \"Image alpha\" modifies the image where the image is transparent."
)
mask_mode = st.selectbox("Mask Mode", mask_mode_list, index=st.session_state["defaults"].img2img.mask_mode,
help="Select how you want your image to be masked.\"Mask\" modifies the image where the mask is white.\n\
\"Inverted mask\" modifies the image where the mask is black. \"Image alpha\" modifies the image where the image is transparent."
)
mask_mode = mask_mode_list.index(mask_mode)
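# For reference, the resulting integer follows mask_mode_list above:
#   0 -> "Mask": regenerate where the mask is white
#   1 -> "Inverted mask": regenerate where the mask is black
#   2 -> "Image alpha": regenerate where the init image is transparent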
noise_mode_list = ["Seed", "Find Noise", "Matched Noise", "Find+Matched Noise"]
noise_mode = st.selectbox(
"Noise Mode", noise_mode_list,
help=""
)
noise_mode = noise_mode_list.index(noise_mode)
find_noise_steps = st.slider("Find Noise Steps", value=st.session_state['defaults'].img2img.find_noise_steps.value,
min_value=st.session_state['defaults'].img2img.find_noise_steps.min_value, max_value=st.session_state['defaults'].img2img.find_noise_steps.max_value,
step=st.session_state['defaults'].img2img.find_noise_steps.step)
with st.expander("Batch Options"):
st.session_state["batch_count"] = int(st.text_input("Batch count.", value=st.session_state['defaults'].img2img.batch_count.value,
help="How many iterations or batches of images to generate in total."))
st.session_state["batch_size"] = int(st.text_input("Batch size", value=st.session_state.defaults.img2img.batch_size.value,
help="How many images are at once in a batch.\
It increases the VRAM usage a lot but if you have enough VRAM it can reduce the time it takes to finish generation as more images are generated at once.\
Default: 1"))
noise_mode_list = ["Seed", "Find Noise", "Matched Noise", "Find+Matched Noise"]
noise_mode = st.selectbox("Noise Mode", noise_mode_list, index=noise_mode_list.index(st.session_state['defaults'].img2img.noise_mode), help="")
#noise_mode = noise_mode_list.index(noise_mode)
find_noise_steps = st.number_input("Find Noise Steps", value=st.session_state['defaults'].img2img.find_noise_steps.value,
min_value=st.session_state['defaults'].img2img.find_noise_steps.min_value,
step=st.session_state['defaults'].img2img.find_noise_steps.step)
# Specify canvas parameters in application
drawing_mode = st.selectbox(
"Drawing tool:",
(
"freedraw",
"transform",
#"line",
"rect",
"circle",
#"polygon",
),
)
stroke_width = st.slider("Stroke width: ", 1, 100, 50)
stroke_color = st.color_picker("Stroke color hex: ", value="#EEEEEE")
bg_color = st.color_picker("Background color hex: ", "#7B6E6E")
display_toolbar = st.checkbox("Display toolbar", True)
#realtime_update = st.checkbox("Update in realtime", True)
with st.expander("Batch Options"):
st.session_state["batch_count"] = st.number_input("Batch count.", value=st.session_state['defaults'].img2img.batch_count.value,
help="How many iterations or batches of images to generate in total.")
st.session_state["batch_size"] = st.number_input("Batch size", value=st.session_state.defaults.img2img.batch_size.value,
help="How many images are at once in a batch.\
It increases the VRAM usage a lot but if you have enough VRAM it can reduce the time it takes to finish generation as more images are generated at once.\
Default: 1")
with st.expander("Preview Settings"):
st.session_state["update_preview"] = st.session_state["defaults"].general.update_preview
st.session_state["update_preview_frequency"] = st.text_input("Update Image Preview Frequency", value=st.session_state['defaults'].img2img.update_preview_frequency,
help="Frequency in steps at which the the preview image is updated. By default the frequency \
is set to 1 step.")
#
st.session_state["update_preview_frequency"] = st.number_input("Update Image Preview Frequency",
min_value=0,
value=st.session_state['defaults'].img2img.update_preview_frequency,
help="Frequency in steps at which the the preview image is updated. By default the frequency \
is set to 1 step.")
#
with st.expander("Advanced"):
with st.expander("Output Settings"):
separate_prompts = st.checkbox("Create Prompt Matrix.", value=st.session_state['defaults'].img2img.separate_prompts,
@ -468,21 +495,21 @@ def layout():
group_by_prompt = st.checkbox("Group results by prompt", value=st.session_state['defaults'].img2img.group_by_prompt,
help="Saves all the images with the same prompt into the same folder. \
When using a prompt matrix each prompt combination will have its own folder.")
write_info_files = st.checkbox("Write Info file", value=st.session_state['defaults'].img2img.write_info_files,
help="Save a file next to the image with informartion about the generation.")
write_info_files = st.checkbox("Write Info file", value=st.session_state['defaults'].img2img.write_info_files,
help="Save a file next to the image with informartion about the generation.")
save_as_jpg = st.checkbox("Save samples as jpg", value=st.session_state['defaults'].img2img.save_as_jpg, help="Saves the images as jpg instead of png.")
#
# check if GFPGAN, RealESRGAN and LDSR are available.
if "GFPGAN_available" not in st.session_state:
GFPGAN_available()
if "RealESRGAN_available" not in st.session_state:
RealESRGAN_available()
if "LDSR_available" not in st.session_state:
LDSR_available()
if st.session_state["GFPGAN_available"] or st.session_state["RealESRGAN_available"] or st.session_state["LDSR_available"]:
with st.expander("Post-Processing"):
face_restoration_tab, upscaling_tab = st.tabs(["Face Restoration", "Upscaling"])
@ -496,44 +523,46 @@ def layout():
help="Uses the GFPGAN model to improve faces after the generation.\
This greatly improves the quality and consistency of faces but uses\
extra VRAM. Disable if you need the extra VRAM.")
st.session_state["GFPGAN_model"] = st.selectbox("GFPGAN model", st.session_state["GFPGAN_models"],
index=st.session_state["GFPGAN_models"].index(st.session_state['defaults'].general.GFPGAN_model))
index=st.session_state["GFPGAN_models"].index(st.session_state['defaults'].general.GFPGAN_model))
#st.session_state["GFPGAN_strenght"] = st.slider("Effect Strenght", min_value=1, max_value=100, value=1, step=1, help='')
else:
st.session_state["use_GFPGAN"] = False
st.session_state["use_GFPGAN"] = False
with upscaling_tab:
st.session_state['us_upscaling'] = st.checkbox("Use Upscaling", value=st.session_state['defaults'].img2img.use_upscaling)
# RealESRGAN and LDSR used for upscaling.
# RealESRGAN and LDSR used for upscaling.
if st.session_state["RealESRGAN_available"] or st.session_state["LDSR_available"]:
upscaling_method_list = []
if st.session_state["RealESRGAN_available"]:
upscaling_method_list.append("RealESRGAN")
if st.session_state["LDSR_available"]:
upscaling_method_list.append("LDSR")
st.session_state["upscaling_method"] = st.selectbox("Upscaling Method", upscaling_method_list,
index=upscaling_method_list.index(st.session_state['defaults'].general.upscaling_method))
index=upscaling_method_list.index(st.session_state['defaults'].general.upscaling_method)
if st.session_state['defaults'].general.upscaling_method in upscaling_method_list
else 0)
if st.session_state["RealESRGAN_available"]:
with st.expander("RealESRGAN"):
if st.session_state["upscaling_method"] == "RealESRGAN" and st.session_state['us_upscaling']:
st.session_state["use_RealESRGAN"] = True
else:
st.session_state["use_RealESRGAN"] = False
st.session_state["RealESRGAN_model"] = st.selectbox("RealESRGAN model", st.session_state["RealESRGAN_models"],
index=st.session_state["RealESRGAN_models"].index(st.session_state['defaults'].general.RealESRGAN_model))
index=st.session_state["RealESRGAN_models"].index(st.session_state['defaults'].general.RealESRGAN_model))
else:
st.session_state["use_RealESRGAN"] = False
st.session_state["RealESRGAN_model"] = "RealESRGAN_x4plus"
#
if st.session_state["LDSR_available"]:
with st.expander("LDSR"):
@ -541,153 +570,163 @@ def layout():
st.session_state["use_LDSR"] = True
else:
st.session_state["use_LDSR"] = False
st.session_state["LDSR_model"] = st.selectbox("LDSR model", st.session_state["LDSR_models"],
index=st.session_state["LDSR_models"].index(st.session_state['defaults'].general.LDSR_model))
st.session_state["ldsr_sampling_steps"] = int(st.text_input("Sampling Steps", value=st.session_state['defaults'].img2img.LDSR_config.sampling_steps,
help=""))
st.session_state["preDownScale"] = int(st.text_input("PreDownScale", value=st.session_state['defaults'].img2img.LDSR_config.preDownScale,
help=""))
st.session_state["postDownScale"] = int(st.text_input("postDownScale", value=st.session_state['defaults'].img2img.LDSR_config.postDownScale,
help=""))
index=st.session_state["LDSR_models"].index(st.session_state['defaults'].general.LDSR_model))
st.session_state["ldsr_sampling_steps"] = st.number_input("Sampling Steps", value=st.session_state['defaults'].img2img.LDSR_config.sampling_steps,
help="")
st.session_state["preDownScale"] = st.number_input("PreDownScale", value=st.session_state['defaults'].img2img.LDSR_config.preDownScale,
help="")
st.session_state["postDownScale"] = st.number_input("postDownScale", value=st.session_state['defaults'].img2img.LDSR_config.postDownScale,
help="")
downsample_method_list = ['Nearest', 'Lanczos']
st.session_state["downsample_method"] = st.selectbox("Downsample Method", downsample_method_list,
index=downsample_method_list.index(st.session_state['defaults'].img2img.LDSR_config.downsample_method))
else:
st.session_state["use_LDSR"] = False
st.session_state["LDSR_model"] = "model"
st.session_state["LDSR_model"] = "model"
with st.expander("Variant"):
variant_amount = st.slider("Variant Amount:", value=st.session_state['defaults'].img2img.variant_amount, min_value=0.0, max_value=1.0, step=0.01)
variant_seed = st.text_input("Variant Seed:", value=st.session_state['defaults'].img2img.variant_seed,
help="The seed to use when generating a variant, if left blank a random seed will be generated.")
with col2_img2img_layout:
editor_tab = st.tabs(["Editor"])
editor_image = st.empty()
st.session_state["editor_image"] = editor_image
st.form_submit_button("Refresh")
#if "canvas" not in st.session_state:
st.session_state["canvas"] = st.empty()
masked_image_holder = st.empty()
image_holder = st.empty()
st.form_submit_button("Refresh")
uploaded_images = st.file_uploader(
"Upload Image", accept_multiple_files=False, type=["png", "jpg", "jpeg", "webp"],
"Upload Image", accept_multiple_files=False, type=["png", "jpg", "jpeg", "webp", 'jfif'],
help="Upload an image which will be used for the image to image generation.",
)
if uploaded_images:
image = Image.open(uploaded_images).convert('RGBA')
image = Image.open(uploaded_images).convert('RGB')
new_img = image.resize((width, height))
image_holder.image(new_img)
mask_holder = st.empty()
uploaded_masks = st.file_uploader(
"Upload Mask", accept_multiple_files=False, type=["png", "jpg", "jpeg", "webp"],
help="Upload an mask image which will be used for masking the image to image generation.",
)
if uploaded_masks:
mask_expander.expander("Mask", expanded=True)
mask = Image.open(uploaded_masks)
if mask.mode == "RGBA":
mask = mask.convert('RGBA')
background = Image.new('RGBA', mask.size, (0, 0, 0))
mask = Image.alpha_composite(background, mask)
mask = mask.resize((width, height))
mask_holder.image(mask)
if uploaded_images and uploaded_masks:
if mask_mode != 2:
final_img = new_img.copy()
alpha_layer = mask.convert('L')
strength = st.session_state["denoising_strength"]
if mask_mode == 0:
alpha_layer = ImageOps.invert(alpha_layer)
alpha_layer = alpha_layer.point(lambda a: a * strength)
alpha_layer = ImageOps.invert(alpha_layer)
elif mask_mode == 1:
alpha_layer = alpha_layer.point(lambda a: a * strength)
alpha_layer = ImageOps.invert(alpha_layer)
final_img.putalpha(alpha_layer)
with masked_image_holder.container():
st.text("Masked Image Preview")
st.image(final_img)
#image_holder.image(new_img)
#mask_holder = st.empty()
#uploaded_masks = st.file_uploader(
#"Upload Mask", accept_multiple_files=False, type=["png", "jpg", "jpeg", "webp", 'jfif'],
#help="Upload an mask image which will be used for masking the image to image generation.",
#)
#
# Create a canvas component
with st.session_state["canvas"]:
st.session_state["uploaded_masks"] = st_canvas(
fill_color="rgba(255, 165, 0, 0.3)", # Fixed fill color with some opacity
stroke_width=stroke_width,
stroke_color=stroke_color,
background_color=bg_color,
background_image=image if uploaded_images else None,
update_streamlit=True,
width=width,
height=height,
drawing_mode=drawing_mode,
initial_drawing=st.session_state["uploaded_masks"].json_data if "uploaded_masks" in st.session_state else None,
display_toolbar=display_toolbar,
key="full_app",
)
#try:
##print (type(st.session_state["uploaded_masks"]))
#if st.session_state["uploaded_masks"] != None:
#mask_expander.expander("Mask", expanded=True)
#mask = Image.fromarray(st.session_state["uploaded_masks"].image_data)
#st.image(mask)
#if mask.mode == "RGBA":
#mask = mask.convert('RGBA')
#background = Image.new('RGBA', mask.size, (0, 0, 0))
#mask = Image.alpha_composite(background, mask)
#mask = mask.resize((width, height))
#except AttributeError:
#pass
with col3_img2img_layout:
result_tab = st.tabs(["Result"])
# create an empty container for the image, progress bar, etc so we can update it later and use session_state to hold them globally.
preview_image = st.empty()
st.session_state["preview_image"] = preview_image
#st.session_state["loading"] = st.empty()
st.session_state["progress_bar_text"] = st.empty()
st.session_state["progress_bar"] = st.empty()
message = st.empty()
#if uploaded_images:
#image = Image.open(uploaded_images).convert('RGB')
##img_array = np.array(image) # if you want to pass it to OpenCV
#new_img = image.resize((width, height))
#st.image(new_img, use_column_width=True)
if generate_button:
#print("Loading models")
# load the models when we hit the generate button for the first time; they won't be reloaded after that, so don't worry.
with col3_img2img_layout:
with hc.HyLoader('Loading Models...', hc.Loaders.standard_loaders,index=[0]):
load_models(use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
use_GFPGAN=st.session_state["use_GFPGAN"], GFPGAN_model=st.session_state["GFPGAN_model"],
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
CustomModel_available=server_state["CustomModel_available"], custom_model=st.session_state["custom_model"])
use_GFPGAN=st.session_state["use_GFPGAN"], GFPGAN_model=st.session_state["GFPGAN_model"],
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
CustomModel_available=server_state["CustomModel_available"], custom_model=st.session_state["custom_model"])
if uploaded_images:
image = Image.open(uploaded_images).convert('RGBA')
new_img = image.resize((width, height))
#img_array = np.array(image) # if you want to pass it to OpenCV
#image = Image.fromarray(image).convert('RGBA')
#new_img = image.resize((width, height))
###img_array = np.array(image) # if you want to pass it to OpenCV
#image_holder.image(new_img)
new_mask = None
if uploaded_masks:
mask = Image.open(uploaded_masks).convert('RGBA')
if st.session_state["uploaded_masks"]:
mask = Image.fromarray(st.session_state["uploaded_masks"].image_data)
new_mask = mask.resize((width, height))
#masked_image_holder.image(new_mask)
try:
output_images, seed, info, stats = img2img(prompt=prompt, init_info=new_img, init_info_mask=new_mask, mask_mode=mask_mode,
mask_restore=img2img_mask_restore, ddim_steps=st.session_state["sampling_steps"],
sampler_name=st.session_state["sampler_name"], n_iter=st.session_state["batch_count"],
cfg_scale=cfg_scale, denoising_strength=st.session_state["denoising_strength"], variant_seed=variant_seed,
seed=seed, noise_mode=noise_mode, find_noise_steps=find_noise_steps, width=width,
height=height, variant_amount=variant_amount,
seed=seed, noise_mode=noise_mode, find_noise_steps=find_noise_steps, width=width,
height=height, variant_amount=variant_amount,
ddim_eta=st.session_state.defaults.img2img.ddim_eta, write_info_files=write_info_files,
separate_prompts=separate_prompts, normalize_prompt_weights=normalize_prompt_weights,
save_individual_images=save_individual_images, save_grid=save_grid,
save_individual_images=save_individual_images, save_grid=save_grid,
group_by_prompt=group_by_prompt, save_as_jpg=save_as_jpg, use_GFPGAN=st.session_state["use_GFPGAN"],
GFPGAN_model=st.session_state["GFPGAN_model"],
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
loopback=loopback
)
#show a message when the generation is complete.
message.success('Render Complete: ' + info + '; Stats: ' + stats, icon="")
except (StopException, KeyError):
print(f"Received Streamlit StopException")
logger.info(f"Received Streamlit StopException")
# this will render all the images at the end of the generation, but it's better if it's moved to a second tab inside col2 and shown as a gallery.
# use the current col2 first tab to show the preview_img and update it as its generated.
#preview_image.image(output_images, width=750)

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -18,7 +18,7 @@
"""
CLIP Interrogator, made by @pharmapsychotic, modified to work with our WebUI.
# CLIP Interrogator by @pharmapsychotic
# CLIP Interrogator by @pharmapsychotic
Twitter: https://twitter.com/pharmapsychotic
Github: https://github.com/pharmapsychotic/clip-interrogator
@ -54,92 +54,46 @@ from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode
from ldm.models.blip import blip_decoder
#import hashlib
# end of imports
# ---------------------------------------------------------------------------------------------------------------
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
blip_image_eval_size = 512
#blip_model_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base_caption.pth'
server_state["clip_models"] = {}
server_state["preprocesses"] = {}
st.session_state["log"] = []
def load_blip_model():
print("Loading BLIP Model")
st.session_state["log_message"].code("Loading BLIP Model", language='')
logger.info("Loading BLIP Model")
if "log" not in st.session_state:
st.session_state["log"] = []
st.session_state["log"].append("Loading BLIP Model")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
if "blip_model" not in server_state:
with server_state_lock['blip_model']:
server_state["blip_model"] = blip_decoder(pretrained="models/blip/model__base_caption.pth",
image_size=blip_image_eval_size, vit='base', med_config="configs/blip/med_config.json")
server_state["blip_model"] = server_state["blip_model"].eval()
#if not st.session_state["defaults"].general.optimized:
server_state["blip_model"] = server_state["blip_model"].to(device).half()
print("BLIP Model Loaded")
st.session_state["log_message"].code("BLIP Model Loaded", language='')
logger.info("BLIP Model Loaded")
st.session_state["log"].append("BLIP Model Loaded")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
else:
print("BLIP Model already loaded")
st.session_state["log_message"].code("BLIP Model Already Loaded", language='')
logger.info("BLIP Model already loaded")
st.session_state["log"].append("BLIP Model already loaded")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
#return server_state["blip_model"]
#
def artstation_links():
"""Find and save every artstation link for the first 500 pages of the explore page."""
# collecting links to the list()
links = []
with open('data/img2txt/artstation_links.txt', 'w') as f:
for page_num in range(1,500):
response = requests.get(f'https://www.artstation.com/api/v2/community/explore/projects/trending.json?page={page_num}&dimension=all&per_page=100').text
# open json response
data = json.loads(response)
# looping through the JSON response
for result in data['data']:
# still looping and grabbing URLs
url = result['url']
links.append(url)
# writing each link on the new line (\n)
f.write(f'{url}\n')
return links
#
def artstation_users():
"""Get all the usernames and full name of the users on the first 500 pages of artstation explore page."""
# collect username and full name
artists = []
# opening a .txt file
with open('data/img2txt/artstation_artists.txt', 'w') as f:
for page_num in range(1,500):
response = requests.get(f'https://www.artstation.com/api/v2/community/explore/projects/trending.json?page={page_num}&dimension=all&per_page=100').text
# open json response
data = json.loads(response)
# looping through the JSON response
for item in data['data']:
#print (item['user'])
username = item['user']['username']
full_name = item['user']['full_name']
# still looping and grabbing usernames and full names
artists.append(username)
artists.append(full_name)
# writing each link on the new line (\n)
f.write(f'{slugify(username)}\n')
f.write(f'{slugify(full_name)}\n')
return artists
def generate_caption(pil_image):
load_blip_model()
gpu_image = transforms.Compose([ # type: ignore
transforms.Resize((blip_image_eval_size, blip_image_eval_size), interpolation=InterpolationMode.BICUBIC), # type: ignore
transforms.ToTensor(), # type: ignore
@ -149,7 +103,6 @@ def generate_caption(pil_image):
with torch.no_grad():
caption = server_state["blip_model"].generate(gpu_image, sample=False, num_beams=3, max_length=20, min_length=5)
#print (caption)
return caption[0]
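# Illustrative usage (path is a placeholder; assumes the BLIP weights above
# have been downloaded):
#   caption = generate_caption(Image.open("example.png").convert("RGB"))
#   logger.info(caption)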
def load_list(filename):
@ -188,40 +141,38 @@ def batch_rank(model, image_features, text_array, batch_size=st.session_state["d
return ranks
def interrogate(image, models):
#server_state["blip_model"] =
load_blip_model()
print("Generating Caption")
st.session_state["log_message"].code("Generating Caption", language='')
logger.info("Generating Caption")
st.session_state["log"].append("Generating Caption")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
caption = generate_caption(image)
if st.session_state["defaults"].general.optimized:
del server_state["blip_model"]
clear_cuda()
print("Caption Generated")
st.session_state["log_message"].code("Caption Generated", language='')
logger.info("Caption Generated")
st.session_state["log"].append("Caption Generated")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
if len(models) == 0:
print(f"\n\n{caption}")
logger.info(f"\n\n{caption}")
return
table = []
bests = [[('', 0)]]*5
bests = [[('', 0)]]*7
logger.info("Ranking Text")
st.session_state["log"].append("Ranking Text")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
print("Ranking Text")
#if "clip_model" in server_state:
#print (server_state["clip_model"])
#print (st.session_state["log_message"])
for model_name in models:
with torch.no_grad(), torch.autocast('cuda', dtype=torch.float16):
print(f"Interrogating with {model_name}...")
st.session_state["log_message"].code(f"Interrogating with {model_name}...", language='')
logger.info(f"Interrogating with {model_name}...")
st.session_state["log"].append(f"Interrogating with {model_name}...")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
if model_name not in server_state["clip_models"]:
if not st.session_state["defaults"].img2txt.keep_all_models_loaded:
model_to_delete = []
@ -233,38 +184,48 @@ def interrogate(image, models):
del server_state["preprocesses"][model]
clear_cuda()
if model_name == 'ViT-H-14':
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = open_clip.create_model_and_transforms(model_name, pretrained='laion2b_s32b_b79k', cache_dir='models/clip')
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = \
open_clip.create_model_and_transforms(model_name, pretrained='laion2b_s32b_b79k', cache_dir='models/clip')
elif model_name == 'ViT-g-14':
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = open_clip.create_model_and_transforms(model_name, pretrained='laion2b_s12b_b42k', cache_dir='models/clip')
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = \
open_clip.create_model_and_transforms(model_name, pretrained='laion2b_s12b_b42k', cache_dir='models/clip')
else:
server_state["clip_models"][model_name], server_state["preprocesses"][model_name] = clip.load(model_name, device=device, download_root='models/clip')
server_state["clip_models"][model_name], server_state["preprocesses"][model_name] = \
clip.load(model_name, device=device, download_root='models/clip')
server_state["clip_models"][model_name] = server_state["clip_models"][model_name].cuda().eval()
images = server_state["preprocesses"][model_name](image).unsqueeze(0).cuda()
image_features = server_state["clip_models"][model_name].encode_image(images).float()
image_features /= image_features.norm(dim=-1, keepdim=True)
if st.session_state["defaults"].general.optimized:
clear_cuda()
ranks = []
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["mediums"]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, ["by "+artist for artist in server_state["artists"]]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["trending_list"]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["movements"]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["flavors"]))
#ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["domains"]))
#ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["subreddits"]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["techniques"]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["tags"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["genres"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["styles"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["techniques"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["subjects"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["colors"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["moods"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["themes"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["keywords"]))
#print (bests)
#print (ranks)
for i in range(len(ranks)):
confidence_sum = 0
for ci in range(len(ranks[i])):
@ -272,55 +233,53 @@ def interrogate(image, models):
if confidence_sum > sum(bests[i][t][1] for t in range(len(bests[i]))):
bests[i] = ranks[i]
for best in bests:
best.sort(key=lambda x: x[1], reverse=True)
# prune to 3
best = best[:3]
row = [model_name]
for r in ranks:
row.append(', '.join([f"{x[0]} ({x[1]:0.1f}%)" for x in r]))
#for rank in ranks:
# rank.sort(key=lambda x: x[1], reverse=True)
# row.append(f'{rank[0][0]} {rank[0][1]:.2f}%')
table.append(row)
if st.session_state["defaults"].general.optimized:
del server_state["clip_models"][model_name]
gc.collect()
# for i in range(len(st.session_state["uploaded_image"])):
st.session_state["prediction_table"][st.session_state["processed_image_count"]].dataframe(pd.DataFrame(
table, columns=["Model", "Medium", "Artist", "Trending", "Movement", "Flavors"]))
table, columns=["Model", "Medium", "Artist", "Trending", "Movement", "Flavors", "Techniques", "Tags"]))
flaves = ', '.join([f"{x[0]}" for x in bests[4]])
medium = bests[0][0][0]
artist = bests[1][0][0]
trending = bests[2][0][0]
movement = bests[3][0][0]
flavors = bests[4][0][0]
#domains = bests[5][0][0]
#subreddits = bests[6][0][0]
techniques = bests[5][0][0]
tags = bests[6][0][0]
if caption.startswith(medium):
st.session_state["text_result"][st.session_state["processed_image_count"]].code(
f"\n\n{caption} {bests[1][0][0]}, {bests[2][0][0]}, {bests[3][0][0]}, {flaves}", language="")
f"\n\n{caption} {artist}, {trending}, {movement}, {techniques}, {flavors}, {tags}", language="")
else:
st.session_state["text_result"][st.session_state["processed_image_count"]].code(
f"\n\n{caption}, {medium} {bests[1][0][0]}, {bests[2][0][0]}, {bests[3][0][0]}, {flaves}", language="")
f"\n\n{caption}, {medium} {artist}, {trending}, {movement}, {techniques}, {flavors}, {tags}", language="")
#
print("Finished Interrogating.")
st.session_state["log_message"].code("Finished Interrogating.", language="")
#
logger.info("Finished Interrogating.")
st.session_state["log"].append("Finished Interrogating.")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
def img2txt():
data_path = "data/"
server_state["artists"] = load_list(os.path.join(data_path, 'img2txt', 'artists.txt'))
server_state["flavors"] = load_list(os.path.join(data_path, 'img2txt', 'flavors.txt'))
server_state["mediums"] = load_list(os.path.join(data_path, 'img2txt', 'mediums.txt'))
server_state["movements"] = load_list(os.path.join(data_path, 'img2txt', 'movements.txt'))
server_state["sites"] = load_list(os.path.join(data_path, 'img2txt', 'sites.txt'))
# server_state["genres"] = load_list(os.path.join(data_path, 'img2txt', 'genres.txt'))
# server_state["styles"] = load_list(os.path.join(data_path, 'img2txt', 'styles.txt'))
# server_state["techniques"] = load_list(os.path.join(data_path, 'img2txt', 'techniques.txt'))
# server_state["subjects"] = load_list(os.path.join(data_path, 'img2txt', 'subjects.txt'))
server_state["trending_list"] = [site for site in server_state["sites"]]
server_state["trending_list"].extend(["trending on "+site for site in server_state["sites"]])
server_state["trending_list"].extend(["featured on "+site for site in server_state["sites"]])
server_state["trending_list"].extend([site+" contest winner" for site in server_state["sites"]])
#image_path_or_url = "https://i.redd.it/e2e8gimigjq91.jpg"
models = []
if st.session_state["ViT-L/14"]:
@ -330,6 +289,24 @@ def img2txt():
if st.session_state["ViT-g-14"]:
models.append('ViT-g-14')
if st.session_state["ViTB32"]:
models.append('ViT-B/32')
if st.session_state['ViTB16']:
models.append('ViT-B/16')
if st.session_state["ViTL14_336px"]:
models.append('ViT-L/14@336px')
if st.session_state["RN101"]:
models.append('RN101')
if st.session_state["RN50"]:
models.append('RN50')
if st.session_state["RN50x4"]:
models.append('RN50x4')
if st.session_state["RN50x16"]:
models.append('RN50x16')
if st.session_state["RN50x64"]:
models.append('RN50x64')
# if str(image_path_or_url).startswith('http://') or str(image_path_or_url).startswith('https://'):
#image = Image.open(requests.get(image_path_or_url, stream=True).raw).convert('RGB')
# else:
@ -352,7 +329,36 @@ def img2txt():
def layout():
#set_page_title("Image-to-Text - Stable Diffusion WebUI")
#st.info("Under Construction. :construction_worker:")
#
if "clip_models" not in server_state:
server_state["clip_models"] = {}
if "preprocesses" not in server_state:
server_state["preprocesses"] = {}
data_path = "data/"
if "artists" not in server_state:
server_state["artists"] = load_list(os.path.join(data_path, 'img2txt', 'artists.txt'))
if "flavors" not in server_state:
server_state["flavors"] = random.choices(load_list(os.path.join(data_path, 'img2txt', 'flavors.txt')), k=2000)
if "mediums" not in server_state:
server_state["mediums"] = load_list(os.path.join(data_path, 'img2txt', 'mediums.txt'))
if "movements" not in server_state:
server_state["movements"] = load_list(os.path.join(data_path, 'img2txt', 'movements.txt'))
if "sites" not in server_state:
server_state["sites"] = load_list(os.path.join(data_path, 'img2txt', 'sites.txt'))
#server_state["domains"] = load_list(os.path.join(data_path, 'img2txt', 'domains.txt'))
#server_state["subreddits"] = load_list(os.path.join(data_path, 'img2txt', 'subreddits.txt'))
if "techniques" not in server_state:
server_state["techniques"] = load_list(os.path.join(data_path, 'img2txt', 'techniques.txt'))
if "tags" not in server_state:
server_state["tags"] = load_list(os.path.join(data_path, 'img2txt', 'tags.txt'))
#server_state["genres"] = load_list(os.path.join(data_path, 'img2txt', 'genres.txt'))
# server_state["styles"] = load_list(os.path.join(data_path, 'img2txt', 'styles.txt'))
# server_state["subjects"] = load_list(os.path.join(data_path, 'img2txt', 'subjects.txt'))
if "trending_list" not in server_state:
server_state["trending_list"] = [site for site in server_state["sites"]]
server_state["trending_list"].extend(["trending on "+site for site in server_state["sites"]])
server_state["trending_list"].extend(["featured on "+site for site in server_state["sites"]])
server_state["trending_list"].extend([site+" contest winner" for site in server_state["sites"]])
with st.form("img2txt-inputs"):
st.session_state["generation_mode"] = "img2txt"
@ -361,16 +367,27 @@ def layout():
col1, col2 = st.columns([1, 4], gap="large")
with col1:
#url = st.text_area("Input Text","")
#url = st.text_input("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.")
#st.subheader("Input Image")
st.session_state["uploaded_image"] = st.file_uploader('Input Image', type=['png', 'jpg', 'jpeg'], accept_multiple_files=True)
st.session_state["uploaded_image"] = st.file_uploader('Input Image', type=['png', 'jpg', 'jpeg', 'jfif', 'webp'], accept_multiple_files=True)
with st.expander("CLIP models", expanded=True):
st.session_state["ViT-L/14"] = st.checkbox("ViT-L/14", value=True, help="ViT-L/14 model.")
st.session_state["ViT-H-14"] = st.checkbox("ViT-H-14", value=False, help="ViT-H-14 model.")
st.session_state["ViT-g-14"] = st.checkbox("ViT-g-14", value=False, help="ViT-g-14 model.")
with st.expander("Others"):
st.info("For DiscoDiffusion and JAX enable all the same models here as you intend to use when generating your images.")
st.session_state["ViTL14_336px"] = st.checkbox("ViTL14_336px", value=False, help="ViTL14_336px model.")
st.session_state["ViTB16"] = st.checkbox("ViTB16", value=False, help="ViTB16 model.")
st.session_state["ViTB32"] = st.checkbox("ViTB32", value=False, help="ViTB32 model.")
st.session_state["RN50"] = st.checkbox("RN50", value=False, help="RN50 model.")
st.session_state["RN50x4"] = st.checkbox("RN50x4", value=False, help="RN50x4 model.")
st.session_state["RN50x16"] = st.checkbox("RN50x16", value=False, help="RN50x16 model.")
st.session_state["RN50x64"] = st.checkbox("RN50x64", value=False, help="RN50x64 model.")
st.session_state["RN101"] = st.checkbox("RN101", value=False, help="RN101 model.")
#
# st.subheader("Logs:")
@ -380,7 +397,9 @@ def layout():
with col2:
st.subheader("Image")
refresh = st.form_submit_button("Refresh", help='Refresh the image preview to show your uploaded image instead of the default placeholder.')
image_col1, image_col2 = st.columns([10,25])
with image_col1:
refresh = st.form_submit_button("Update Preview Image", help='Refresh the image preview to show your uploaded image instead of the default placeholder.')
if st.session_state["uploaded_image"]:
#print (type(st.session_state["uploaded_image"]))
@ -419,11 +438,12 @@ def layout():
#st.session_state["input_image_preview"].code('', language="")
st.image("images/streamlit/img2txt_placeholder.png", clamp=True)
#
# Every form must have a submit button; the extra blank spaces are a temp way to align it with the input field. Needs to be done in CSS or some other way.
# generate_col1.title("")
# generate_col1.title("")
generate_button = st.form_submit_button("Generate!")
with image_col2:
#
# Every form must have a submit button; the extra blank spaces are a temp way to align it with the input field. Needs to be done in CSS or some other way.
# generate_col1.title("")
# generate_col1.title("")
generate_button = st.form_submit_button("Generate!", help="Start interrogating the images to generate a prompt from each of the selected images")
if generate_button:
# if model, pipe, RealESRGAN or GFPGAN is in st.session_state, remove the model and pipe from session_state so that they are reloaded.

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

View File

@ -70,15 +70,19 @@ genfmt = "<level>{level: <10}</level> @ <green>{time:YYYY-MM-DD HH:mm:ss}</green
initfmt = "<magenta>INIT </magenta> | <level>{extra[status]: <10}</level> | <magenta>{message}</magenta>"
msgfmt = "<level>{level: <10}</level> | <level>{message}</level>"
logger.level("GENERATION", no=24, color="<cyan>")
logger.level("PROMPT", no=23, color="<yellow>")
logger.level("INIT", no=31, color="<white>")
logger.level("INIT_OK", no=31, color="<green>")
logger.level("INIT_WARN", no=31, color="<yellow>")
logger.level("INIT_ERR", no=31, color="<red>")
# Messages contain important information without which this application might be unusable.
# As such, they have the highest priority.
logger.level("MESSAGE", no=61, color="<green>")
try:
logger.level("GENERATION", no=24, color="<cyan>")
logger.level("PROMPT", no=23, color="<yellow>")
logger.level("INIT", no=31, color="<white>")
logger.level("INIT_OK", no=31, color="<green>")
logger.level("INIT_WARN", no=31, color="<yellow>")
logger.level("INIT_ERR", no=31, color="<red>")
# Messages contain important information without which this application might be unusable.
# As such, they have the highest priority.
logger.level("MESSAGE", no=61, color="<green>")
except TypeError:
pass
logger.__class__.generation = partialmethod(logger.__class__.log, "GENERATION")
logger.__class__.prompt = partialmethod(logger.__class__.log, "PROMPT")
@ -97,3 +101,5 @@ config = {
],
}
logger.configure(**config)
logger.add("logs/log_{time:MM-DD-YYYY!UTC}.log", rotation="8 MB", compression="zip", level='INFO') # Once the file is too old, it's rotated

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

View File

@ -0,0 +1,34 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sandbox-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# base webui import and utils.
#from sd_utils import *
from sd_utils import *
# streamlit imports
#streamlit components section
#other imports
import os, time, requests
import sys
# Temp imports
# end of imports
#---------------------------------------------------------------------------------------------------------------
def layout():
st.info("Under Construction. :construction_worker:")

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -238,7 +238,7 @@ def layout():
with st.container():
if downloaded_concepts_count == 0:
st.write("You don't have any concepts in your library ")
st.markdown("To add concepts to your library, download some from the [sd-concepts-library](https://github.com/sd-webui/sd-concepts-library) \
st.markdown("To add concepts to your library, download some from the [sd-concepts-library](https://github.com/Sygil-Dev/sd-concepts-library) \
repository and save the content of `sd-concepts-library` into ```./models/custom/sd-concepts-library``` or just create your own concepts :wink:.", unsafe_allow_html=False)
else:
if len(st.session_state["results"]) == 0:

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or

File diff suppressed because it is too large

View File

@ -1,26 +1,144 @@
import gc
import inspect
import warnings
from typing import List, Optional, Union
from typing import Callable, List, Optional, Union
from pathlib import Path
from torchvision.transforms.functional import pil_to_tensor
import librosa
from PIL import Image
from torchvision.io import write_video
import numpy as np
import time
import json
import torch
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
from diffusers.configuration_utils import FrozenDict
from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.pipeline_utils import DiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from diffusers.utils import deprecate, logging
from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
from diffusers import StableDiffusionPipelineOutput
#from diffusers.safety_checker import StableDiffusionSafetyChecker
from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
from torch import nn
from sd_utils import RealESRGANModel
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
class StableDiffusionPipeline(DiffusionPipeline):
def get_timesteps_arr(audio_filepath, offset, duration, fps=30, margin=1.0, smooth=0.0):
y, sr = librosa.load(audio_filepath, offset=offset, duration=duration)
# librosa.stft hardcoded defaults...
# n_fft defaults to 2048
# hop length is win_length // 4
# win_length defaults to n_fft
D = librosa.stft(y, n_fft=2048, hop_length=2048 // 4, win_length=2048)
# Extract percussive elements
D_harmonic, D_percussive = librosa.decompose.hpss(D, margin=margin)
y_percussive = librosa.istft(D_percussive, length=len(y))
# Get normalized melspectrogram
spec_raw = librosa.feature.melspectrogram(y=y_percussive, sr=sr)
spec_max = np.amax(spec_raw, axis=0)
spec_norm = (spec_max - np.min(spec_max)) / np.ptp(spec_max)
# Resize cumsum of spec norm to our desired number of interpolation frames
x_norm = np.linspace(0, spec_norm.shape[-1], spec_norm.shape[-1])
y_norm = np.cumsum(spec_norm)
y_norm /= y_norm[-1]
x_resize = np.linspace(0, y_norm.shape[-1], int(duration*fps))
T = np.interp(x_resize, x_norm, y_norm)
# Apply smoothing
return T * (1 - smooth) + np.linspace(0.0, 1.0, T.shape[0]) * smooth
def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
"""helper function to spherically interpolate two arrays v1 v2"""
inputs_are_torch = False
if not isinstance(v0, np.ndarray):
inputs_are_torch = True
input_device = v0.device
v0 = v0.cpu().numpy()
v1 = v1.cpu().numpy()
dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
if np.abs(dot) > DOT_THRESHOLD:
v2 = (1 - t) * v0 + t * v1
else:
theta_0 = np.arccos(dot)
sin_theta_0 = np.sin(theta_0)
theta_t = theta_0 * t
sin_theta_t = np.sin(theta_t)
s0 = np.sin(theta_0 - theta_t) / sin_theta_0
s1 = sin_theta_t / sin_theta_0
v2 = s0 * v0 + s1 * v1
if inputs_are_torch:
v2 = torch.from_numpy(v2).to(input_device)
return v2
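# Illustrative only (names are placeholders, not from this file): the curve
# from get_timesteps_arr() supplies one interpolation weight per output frame,
# so frames advance faster where the track's percussive energy concentrates:
#   T = get_timesteps_arr("song.mp3", offset=0, duration=8, fps=30)
#   for t in T:
#       latents = slerp(float(t), latents_a, latents_b)        # image latents
#       embeds = slerp(float(t), text_embeds_a, text_embeds_b) # prompt embeddings
#       ...decode `latents` to a frame here...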
def make_video_pyav(
frames_or_frame_dir: Union[str, Path, torch.Tensor],
audio_filepath: Union[str, Path] = None,
fps: int = 30,
audio_offset: int = 0,
audio_duration: int = 2,
sr: int = 22050,
output_filepath: Union[str, Path] = "output.mp4",
glob_pattern: str = "*.png",
):
"""
Write a video from frames, optionally muxing in audio from `audio_filepath`.
frames_or_frame_dir: (Union[str, Path, torch.Tensor]):
Either a directory of images, or a tensor of shape (T, C, H, W) in range [0, 255].
"""
# Torchvision write_video doesn't support pathlib paths
output_filepath = str(output_filepath)
if isinstance(frames_or_frame_dir, (str, Path)):
frames = None
for img in sorted(Path(frames_or_frame_dir).glob(glob_pattern)):
frame = pil_to_tensor(Image.open(img)).unsqueeze(0)
frames = frame if frames is None else torch.cat([frames, frame])
else:
frames = frames_or_frame_dir
# TCHW -> THWC
frames = frames.permute(0, 2, 3, 1)
if audio_filepath:
# Read audio, convert to tensor
audio, sr = librosa.load(audio_filepath, sr=sr, mono=True, offset=audio_offset, duration=audio_duration)
audio_tensor = torch.tensor(audio).unsqueeze(0)
write_video(
output_filepath,
frames,
fps=fps,
audio_array=audio_tensor,
audio_fps=sr,
audio_codec="aac",
options={"crf": "10", "pix_fmt": "yuv420p"},
)
else:
write_video(output_filepath, frames, fps=fps, options={"crf": "10", "pix_fmt": "yuv420p"})
return output_filepath
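# Hedged usage sketch (paths are placeholders): mux a directory of PNG frames
# with the first 10 seconds of a track.
#   make_video_pyav(
#       "outputs/frames",
#       audio_filepath="song.mp3",
#       fps=30,
#       audio_offset=0,
#       audio_duration=10,
#       output_filepath="outputs/walk.mp4",
#   )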
class StableDiffusionWalkPipeline(DiffusionPipeline):
r"""
Pipeline for text-to-image generation using Stable Diffusion.
Pipeline for generating videos by interpolating Stable Diffusion's latent space.
This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
Args:
vae ([`AutoencoderKL`]):
Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
@ -35,6 +153,11 @@ class StableDiffusionPipeline(DiffusionPipeline):
scheduler ([`SchedulerMixin`]):
A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
[`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
safety_checker ([`StableDiffusionSafetyChecker`]):
Classification module that estimates whether generated images could be considered offensive or harmful.
Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
feature_extractor ([`CLIPFeatureExtractor`]):
Model that extracts features from generated images to be used as inputs for the `safety_checker`.
"""
def __init__(
@ -43,10 +166,36 @@ class StableDiffusionPipeline(DiffusionPipeline):
text_encoder: CLIPTextModel,
tokenizer: CLIPTokenizer,
unet: UNet2DConditionModel,
scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
safety_checker: StableDiffusionSafetyChecker,
feature_extractor: CLIPFeatureExtractor,
):
super().__init__()
if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
deprecation_message = (
f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
"to update the config accordingly as leaving `steps_offset` might led to incorrect results"
" in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
" it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
" file"
)
deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
new_config = dict(scheduler.config)
new_config["steps_offset"] = 1
scheduler._internal_dict = FrozenDict(new_config)
if safety_checker is None:
            logger.warning(
                f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
                " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
                " results in services or applications open to the public. Both the diffusers team and Hugging Face"
                " strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling"
                " it only for use cases that involve analyzing network behavior or auditing its results. For more"
                " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
)
self.register_modules(
vae=vae,
text_encoder=text_encoder,
@ -60,10 +209,8 @@ class StableDiffusionPipeline(DiffusionPipeline):
def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
r"""
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention
in several steps. This is useful to save some memory in exchange for a small speed decrease.
Args:
slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
@ -84,35 +231,31 @@ class StableDiffusionPipeline(DiffusionPipeline):
# set slice_size = `None` to disable `attention slicing`
self.enable_attention_slicing(None)
def enable_minimal_memory_usage(self):
"""Moves only unet to fp16 and to CUDA, while keepping lighter models on CPUs"""
self.unet.to(torch.float16).to(torch.device("cuda"))
self.enable_attention_slicing(1)
torch.cuda.empty_cache()
gc.collect()
@torch.no_grad()
def __call__(
self,
prompt: Optional[Union[str, List[str]]] = None,
height: int = 512,
width: int = 512,
num_inference_steps: int = 50,
guidance_scale: float = 7.5,
negative_prompt: Optional[Union[str, List[str]]] = None,
num_images_per_prompt: Optional[int] = 1,
eta: float = 0.0,
generator: Optional[torch.Generator] = None,
latents: Optional[torch.FloatTensor] = None,
output_type: Optional[str] = "pil",
return_dict: bool = True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: Optional[int] = 1,
text_embeddings: Optional[torch.FloatTensor] = None,
**kwargs,
):
r"""
Function invoked when calling the pipeline for generation.
Args:
prompt (`str` or `List[str]`, *optional*, defaults to `None`):
The prompt or prompts to guide the image generation. If not provided, `text_embeddings` is required.
height (`int`, *optional*, defaults to 512):
The height in pixels of the generated image.
width (`int`, *optional*, defaults to 512):
@ -126,6 +269,11 @@ class StableDiffusionPipeline(DiffusionPipeline):
Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
usually at the expense of lower image quality.
negative_prompt (`str` or `List[str]`, *optional*):
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
if `guidance_scale` is less than `1`).
num_images_per_prompt (`int`, *optional*, defaults to 1):
The number of images to generate per prompt.
eta (`float`, *optional*, defaults to 0.0):
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
[`schedulers.DDIMScheduler`], will be ignored for others.
@ -138,11 +286,20 @@ class StableDiffusionPipeline(DiffusionPipeline):
                tensor will be generated by sampling using the supplied random `generator`.
output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated image. Choose between
                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
plain tuple.
callback (`Callable`, *optional*):
A function that will be called every `callback_steps` steps during inference. The function will be
called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function will be called. If not specified, the callback will be
called at every step.
text_embeddings (`torch.FloatTensor`, *optional*, defaults to `None`):
Pre-generated text embeddings to be used as inputs for image generation. Can be used in place of
`prompt` to avoid re-computing the embeddings. If not provided, the embeddings will be generated from
the supplied `prompt`.
Returns:
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
@ -151,37 +308,49 @@ class StableDiffusionPipeline(DiffusionPipeline):
(nsfw) content, according to the `safety_checker`.
"""
if "torch_device" in kwargs:
# device = kwargs.pop("torch_device")
warnings.warn(
"`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0."
" Consider using `pipe.to(torch_device)` instead."
)
# Set device as before (to be removed in 0.3.0)
# if device is None:
# device = "cuda" if torch.cuda.is_available() else "cpu"
# self.to(device)
if height % 8 != 0 or width % 8 != 0:
raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
if (callback_steps is None) or (
callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
):
raise ValueError(
f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
f" {type(callback_steps)}."
)
if text_embeddings is None:
if isinstance(prompt, str):
batch_size = 1
elif isinstance(prompt, list):
batch_size = len(prompt)
else:
raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
# get prompt text embeddings
text_inputs = self.tokenizer(
prompt,
padding="max_length",
max_length=self.tokenizer.model_max_length,
return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
print(
"The following part of your input was truncated because CLIP can only handle sequences up to"
f" {self.tokenizer.model_max_length} tokens: {removed_text}"
)
text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
else:
batch_size = text_embeddings.shape[0]
# duplicate text embeddings for each generation per prompt, using mps friendly method
bs_embed, seq_len, _ = text_embeddings.shape
text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
        # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
@ -189,13 +358,39 @@ class StableDiffusionPipeline(DiffusionPipeline):
do_classifier_free_guidance = guidance_scale > 1.0
# get unconditional embeddings for classifier free guidance
if do_classifier_free_guidance:
uncond_tokens: List[str]
if negative_prompt is None:
uncond_tokens = [""]
elif type(prompt) is not type(negative_prompt):
raise TypeError(
f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
f" {type(prompt)}."
)
elif isinstance(negative_prompt, str):
uncond_tokens = [negative_prompt]
elif batch_size != len(negative_prompt):
raise ValueError(
f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
" the batch size of `prompt`."
)
else:
uncond_tokens = negative_prompt
max_length = self.tokenizer.model_max_length
uncond_input = self.tokenizer(
[""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
)
uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.text_encoder.device))[0].to(
self.unet.device
uncond_tokens,
padding="max_length",
max_length=max_length,
truncation=True,
return_tensors="pt",
)
uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
# duplicate unconditional embeddings for each generation per prompt, using mps friendly method
seq_len = uncond_embeddings.shape[1]
uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
# For classifier free guidance, we need to do two forward passes.
# Here we concatenate the unconditional and text embeddings into a single batch
@ -207,30 +402,30 @@ class StableDiffusionPipeline(DiffusionPipeline):
# Unlike in other pipelines, latents need to be generated in the target device
# for 1-to-1 results reproducibility with the CompVis implementation.
# However this currently doesn't work in `mps`.
latents_device = "cpu" if self.device.type == "mps" else self.device
latents_shape = (batch_size, self.unet.in_channels, height // 8, width // 8)
latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
latents_dtype = text_embeddings.dtype
if latents is None:
if self.device.type == "mps":
# randn does not exist on mps
latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
self.device
)
else:
latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
else:
if latents.shape != latents_shape:
raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
            latents = latents.to(self.device)
# set timesteps
accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
extra_set_kwargs = {}
if accepts_offset:
extra_set_kwargs["offset"] = 1
self.scheduler.set_timesteps(num_inference_steps)
self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
# Some schedulers like PNDM have timesteps as arrays
# It's more optimized to move all timesteps to correct device beforehand
timesteps_tensor = self.scheduler.timesteps.to(self.device)
# scale the initial noise by the standard deviation required by the scheduler
latents = latents * self.scheduler.init_noise_sigma
# prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
# eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
@ -241,18 +436,13 @@ class StableDiffusionPipeline(DiffusionPipeline):
if accepts_eta:
extra_step_kwargs["eta"] = eta
for i, t in enumerate(self.progress_bar(timesteps_tensor)):
# expand the latents if we are doing classifier free guidance
latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
# predict the noise residual
noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
# perform guidance
if do_classifier_free_guidance:
@ -260,29 +450,29 @@ class StableDiffusionPipeline(DiffusionPipeline):
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
# compute the previous noisy sample x_t -> x_t-1
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
# call the callback, if provided
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
# scale and decode the image latents with vae
latents = 1 / 0.18215 * latents
image = self.vae.decode(latents).sample
image = (image / 2 + 0.5).clamp(0, 1)
        # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
image = image.cpu().permute(0, 2, 3, 1).float().numpy()
if self.safety_checker is not None:
safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
self.device
)
image, has_nsfw_concept = self.safety_checker(
images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
)
else:
has_nsfw_concept = None
if output_type == "pil":
image = self.numpy_to_pil(image)
@ -290,4 +480,378 @@ class StableDiffusionPipeline(DiffusionPipeline):
if not return_dict:
return (image, has_nsfw_concept)
        return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
def generate_inputs(self, prompt_a, prompt_b, seed_a, seed_b, noise_shape, T, batch_size):
embeds_a = self.embed_text(prompt_a)
embeds_b = self.embed_text(prompt_b)
latents_a = self.init_noise(seed_a, noise_shape)
latents_b = self.init_noise(seed_b, noise_shape)
batch_idx = 0
embeds_batch, noise_batch = None, None
for i, t in enumerate(T):
embeds = torch.lerp(embeds_a, embeds_b, t)
noise = slerp(float(t), latents_a, latents_b)
embeds_batch = embeds if embeds_batch is None else torch.cat([embeds_batch, embeds])
noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise])
batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == T.shape[0]
if not batch_is_ready:
continue
yield batch_idx, embeds_batch, noise_batch
batch_idx += 1
del embeds_batch, noise_batch
torch.cuda.empty_cache()
embeds_batch, noise_batch = None, None
def make_clip_frames(
self,
prompt_a: str,
prompt_b: str,
seed_a: int,
seed_b: int,
num_interpolation_steps: int = 5,
save_path: Union[str, Path] = "outputs/",
num_inference_steps: int = 50,
guidance_scale: float = 7.5,
eta: float = 0.0,
height: int = 512,
width: int = 512,
upsample: bool = False,
batch_size: int = 1,
image_file_ext: str = ".png",
        T: Optional[np.ndarray] = None,
skip: int = 0,
):
save_path = Path(save_path)
save_path.mkdir(parents=True, exist_ok=True)
T = T if T is not None else np.linspace(0.0, 1.0, num_interpolation_steps)
if T.shape[0] != num_interpolation_steps:
raise ValueError(f"Unexpected T shape, got {T.shape}, expected dim 0 to be {num_interpolation_steps}")
if upsample:
if getattr(self, "upsampler", None) is None:
self.upsampler = RealESRGANModel.from_pretrained("nateraw/real-esrgan")
self.upsampler.to(self.device)
batch_generator = self.generate_inputs(
prompt_a,
prompt_b,
seed_a,
seed_b,
(1, self.unet.in_channels, height // 8, width // 8),
T[skip:],
batch_size,
)
frame_index = skip
for _, embeds_batch, noise_batch in batch_generator:
with torch.autocast("cuda"):
outputs = self(
latents=noise_batch,
text_embeddings=embeds_batch,
height=height,
width=width,
guidance_scale=guidance_scale,
eta=eta,
num_inference_steps=num_inference_steps,
output_type="pil" if not upsample else "numpy",
)["images"]
for image in outputs:
frame_filepath = save_path / (f"frame%06d{image_file_ext}" % frame_index)
image = image if not upsample else self.upsampler(image)
image.save(frame_filepath)
frame_index += 1
def walk(
self,
prompts: Optional[List[str]] = None,
seeds: Optional[List[int]] = None,
num_interpolation_steps: Optional[Union[int, List[int]]] = 5, # int or list of int
output_dir: Optional[str] = "./dreams",
name: Optional[str] = None,
image_file_ext: Optional[str] = ".png",
fps: Optional[int] = 30,
num_inference_steps: Optional[int] = 50,
guidance_scale: Optional[float] = 7.5,
eta: Optional[float] = 0.0,
height: Optional[int] = 512,
width: Optional[int] = 512,
upsample: Optional[bool] = False,
batch_size: Optional[int] = 1,
resume: Optional[bool] = False,
        audio_filepath: Optional[str] = None,
audio_start_sec: Optional[Union[int, float]] = None,
margin: Optional[float] = 1.0,
smooth: Optional[float] = 0.0,
):
"""Generate a video from a sequence of prompts and seeds. Optionally, add audio to the
video to interpolate to the intensity of the audio.
Args:
prompts (Optional[List[str]], optional):
list of text prompts. Defaults to None.
seeds (Optional[List[int]], optional):
list of random seeds corresponding to prompts. Defaults to None.
            num_interpolation_steps (Union[int, List[int]], *optional*, defaults to 5):
                How many interpolation steps to take between each pair of prompts.
output_dir (Optional[str], optional):
Where to save the video. Defaults to './dreams'.
name (Optional[str], optional):
Name of the subdirectory of output_dir. Defaults to None.
image_file_ext (Optional[str], *optional*, defaults to '.png'):
The extension to use when writing video frames.
fps (Optional[int], *optional*, defaults to 30):
The frames per second in the resulting output videos.
num_inference_steps (Optional[int], *optional*, defaults to 50):
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference.
guidance_scale (Optional[float], *optional*, defaults to 7.5):
Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
`guidance_scale` is defined as `w` of equation 2. of [Imagen
Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
usually at the expense of lower image quality.
eta (Optional[float], *optional*, defaults to 0.0):
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
[`schedulers.DDIMScheduler`], will be ignored for others.
height (Optional[int], *optional*, defaults to 512):
height of the images to generate.
width (Optional[int], *optional*, defaults to 512):
width of the images to generate.
upsample (Optional[bool], *optional*, defaults to False):
When True, upsamples images with realesrgan.
batch_size (Optional[int], *optional*, defaults to 1):
Number of images to generate at once.
resume (Optional[bool], *optional*, defaults to False):
When True, resumes from the last frame in the output directory based
on available prompt config. Requires you to provide the `name` argument.
audio_filepath (str, *optional*, defaults to None):
Optional path to an audio file to influence the interpolation rate.
audio_start_sec (Optional[Union[int, float]], *optional*, defaults to 0):
Global start time of the provided audio_filepath.
margin (Optional[float], *optional*, defaults to 1.0):
Margin from librosa hpss to use for audio interpolation.
smooth (Optional[float], *optional*, defaults to 0.0):
Smoothness of the audio interpolation. 1.0 means linear interpolation.
        This function will create subdirectories for each prompt and seed pair.
For example, if you provide the following prompts and seeds:
```
prompts = ['a dog', 'a cat', 'a bird']
seeds = [1, 2, 3]
num_interpolation_steps = 5
output_dir = 'output_dir'
name = 'name'
fps = 5
```
Then the following directories will be created:
```
output_dir
name
name_000000
frame000000.png
...
frame000004.png
name_000000.mp4
name_000001
frame000000.png
...
frame000004.png
name_000001.mp4
...
name.mp4
            prompt_config.json
```
Returns:
            str: The resulting video filepath. This video includes all subdirectories' video clips.
"""
output_path = Path(output_dir)
name = name or time.strftime("%Y%m%d-%H%M%S")
save_path_root = output_path / name
save_path_root.mkdir(parents=True, exist_ok=True)
# Where the final video of all the clips combined will be saved
output_filepath = save_path_root / f"{name}.mp4"
        # If a single int was given for num_interpolation_steps, expand it into a per-transition list
if not resume and isinstance(num_interpolation_steps, int):
num_interpolation_steps = [num_interpolation_steps] * (len(prompts) - 1)
if not resume:
audio_start_sec = audio_start_sec or 0
# Save/reload prompt config
prompt_config_path = save_path_root / "prompt_config.json"
if not resume:
prompt_config_path.write_text(
json.dumps(
dict(
prompts=prompts,
seeds=seeds,
num_interpolation_steps=num_interpolation_steps,
fps=fps,
num_inference_steps=num_inference_steps,
guidance_scale=guidance_scale,
eta=eta,
upsample=upsample,
height=height,
width=width,
audio_filepath=audio_filepath,
audio_start_sec=audio_start_sec,
),
indent=2,
sort_keys=False,
)
)
else:
            data = json.loads(prompt_config_path.read_text())
prompts = data["prompts"]
seeds = data["seeds"]
num_interpolation_steps = data["num_interpolation_steps"]
fps = data["fps"]
num_inference_steps = data["num_inference_steps"]
guidance_scale = data["guidance_scale"]
eta = data["eta"]
upsample = data["upsample"]
height = data["height"]
width = data["width"]
audio_filepath = data["audio_filepath"]
audio_start_sec = data["audio_start_sec"]
for i, (prompt_a, prompt_b, seed_a, seed_b, num_step) in enumerate(
zip(prompts, prompts[1:], seeds, seeds[1:], num_interpolation_steps)
):
# {name}_000000 / {name}_000001 / ...
save_path = save_path_root / f"{name}_{i:06d}"
# Where the individual clips will be saved
step_output_filepath = save_path / f"{name}_{i:06d}.mp4"
# Determine if we need to resume from a previous run
skip = 0
if resume:
if step_output_filepath.exists():
print(f"Skipping {save_path} because frames already exist")
continue
existing_frames = sorted(save_path.glob(f"*{image_file_ext}"))
if existing_frames:
skip = int(existing_frames[-1].stem[-6:]) + 1
if skip + 1 >= num_step:
print(f"Skipping {save_path} because frames already exist")
continue
print(f"Resuming {save_path.name} from frame {skip}")
audio_offset = audio_start_sec + sum(num_interpolation_steps[:i]) / fps
audio_duration = num_step / fps
self.make_clip_frames(
prompt_a,
prompt_b,
seed_a,
seed_b,
num_interpolation_steps=num_step,
save_path=save_path,
num_inference_steps=num_inference_steps,
guidance_scale=guidance_scale,
eta=eta,
height=height,
width=width,
upsample=upsample,
batch_size=batch_size,
skip=skip,
T=get_timesteps_arr(
audio_filepath,
offset=audio_offset,
duration=audio_duration,
fps=fps,
margin=margin,
smooth=smooth,
)
if audio_filepath
else None,
)
make_video_pyav(
save_path,
audio_filepath=audio_filepath,
fps=fps,
output_filepath=step_output_filepath,
glob_pattern=f"*{image_file_ext}",
audio_offset=audio_offset,
audio_duration=audio_duration,
sr=44100,
)
return make_video_pyav(
save_path_root,
audio_filepath=audio_filepath,
fps=fps,
audio_offset=audio_start_sec,
audio_duration=sum(num_interpolation_steps) / fps,
output_filepath=output_filepath,
glob_pattern=f"**/*{image_file_ext}",
sr=44100,
)
def embed_text(self, text):
"""Helper to embed some text"""
with torch.autocast("cuda"):
text_input = self.tokenizer(
text,
padding="max_length",
max_length=self.tokenizer.model_max_length,
truncation=True,
return_tensors="pt",
)
with torch.no_grad():
embed = self.text_encoder(text_input.input_ids.to(self.device))[0]
return embed
def init_noise(self, seed, noise_shape):
"""Helper to initialize noise"""
# randn does not exist on mps, so we create noise on CPU here and move it to the device after initialization
if self.device.type == "mps":
noise = torch.randn(
noise_shape,
device='cpu',
generator=torch.Generator(device='cpu').manual_seed(seed),
).to(self.device)
else:
noise = torch.randn(
noise_shape,
device=self.device,
generator=torch.Generator(device=self.device).manual_seed(seed),
)
return noise
@classmethod
def from_pretrained(cls, *args, tiled=False, **kwargs):
"""Same as diffusers `from_pretrained` but with tiled option, which makes images tilable"""
if tiled:
            def patch_conv(**patch):
                # Patch nn.Conv2d globally so every conv layer created
                # afterwards uses the given padding settings (e.g. circular
                # padding, which makes generated textures wrap seamlessly).
                conv_cls = nn.Conv2d
                init = conv_cls.__init__

                def __init__(self, *args, **kwargs):
                    return init(self, *args, **kwargs, **patch)

                conv_cls.__init__ = __init__
patch_conv(padding_mode="circular")
pipeline = super().from_pretrained(*args, **kwargs)
pipeline.tiled = tiled
return pipeline
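# End-to-end usage sketch for StableDiffusionWalkPipeline. The checkpoint id,
# device, and output paths are assumptions, not fixed by this module; `tiled`
# and `text_embeddings` are the two extensions this class adds on top of the
# stock text-to-image pipeline.
def _demo_walk_pipeline():
    pipe = StableDiffusionWalkPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",  # hypothetical checkpoint id
        torch_dtype=torch.float16,
        tiled=False,  # True patches nn.Conv2d for seamlessly tileable images
    ).to("cuda")
    pipe.enable_attention_slicing()  # trade a little speed for lower VRAM

    # Precomputed embeddings can replace `prompt`, exactly as the
    # interpolation loop in `make_clip_frames` does internally.
    embeds = pipe.embed_text("a photo of a tree")
    image = pipe(text_embeddings=embeds, num_inference_steps=30)["images"][0]
    image.save("tree.png")

    # Interpolate between two prompt/seed pairs and render the clips to mp4.
    return pipe.walk(
        prompts=["a forest in spring", "a forest in winter"],
        seeds=[42, 1337],
        num_interpolation_steps=30,
        fps=15,
        output_dir="./dreams",
    )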


@ -1,6 +1,6 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -12,7 +12,7 @@
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# base webui import and utils.
from sd_utils import *
@ -28,7 +28,7 @@ from transformers import CLIPTextModel, CLIPTokenizer
import argparse
import itertools
import math
import os, sys
import random
#import datetime
#from pathlib import Path
@ -210,22 +210,22 @@ def freeze_params(params):
param.requires_grad = False
def save_resume_file(basepath, extra={}, config=''):
info = {"args": config["args"]}
info["args"].update(extra)
with open(f"{os.path.join(basepath, 'resume.json')}", "w") as f:
#print (info)
json.dump(info, f, indent=4)
with open(f"{basepath}/token_identifier.txt", "w") as f:
f.write(f"{config['args']['placeholder_token']}")
with open(f"{basepath}/type_of_concept.txt", "w") as f:
f.write(f"{config['args']['learnable_property']}")
config['args'] = info["args"]
return config['args']
class Checkpointer:
@ -277,7 +277,7 @@ class Checkpointer:
else:
torch.save(learned_embeds_dict, f"{checkpoints_path}/{filename}")
torch.save(learned_embeds_dict, f"{checkpoints_path}/last.bin")
del unwrapped
del learned_embeds
@ -286,15 +286,15 @@ class Checkpointer:
def save_samples(self, step, text_encoder, height, width, guidance_scale, eta, num_inference_steps):
samples_path = f"{self.output_dir}/concept_images"
os.makedirs(samples_path, exist_ok=True)
#if "checker" not in server_state['textual_inversion']:
#with server_state_lock['textual_inversion']["checker"]:
server_state['textual_inversion']["checker"] = NoCheck()
#if "unwrapped" not in server_state['textual_inversion']:
# with server_state_lock['textual_inversion']["unwrapped"]:
server_state['textual_inversion']["unwrapped"] = self.accelerator.unwrap_model(text_encoder)
#if "pipeline" not in server_state['textual_inversion']:
# with server_state_lock['textual_inversion']["pipeline"]:
# Save a sample image
@ -309,7 +309,7 @@ class Checkpointer:
safety_checker=NoCheck(),
feature_extractor=CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32"),
).to("cuda")
server_state['textual_inversion']["pipeline"].enable_attention_slicing()
if self.stable_sample_batches > 0:
@ -333,7 +333,7 @@ class Checkpointer:
num_inference_steps=num_inference_steps,
output_type='pil'
)["sample"]
for idx, im in enumerate(samples):
filename = f"stable_sample_%d_%d_step_%d.png" % (i+1, idx+1, step)
im.save(f"{samples_path}/{filename}")
@ -365,28 +365,28 @@ class Checkpointer:
#@retry(RuntimeError, tries=5)
def textual_inversion(config):
print ("Running textual inversion.")
#if "pipeline" in server_state["textual_inversion"]:
#del server_state['textual_inversion']["checker"]
#del server_state['textual_inversion']["unwrapped"]
#del server_state['textual_inversion']["pipeline"]
#torch.cuda.empty_cache()
global_step_offset = 0
#print(config['args']['resume_from'])
if config['args']['resume_from']:
try:
basepath = f"{config['args']['resume_from']}"
with open(f"{basepath}/resume.json", 'r') as f:
state = json.load(f)
global_step_offset = state["args"].get("global_step", 0)
print("Resuming state from %s" % config['args']['resume_from'])
print("We've trained %d steps so far" % global_step_offset)
except json.decoder.JSONDecodeError:
pass
else:
@ -398,7 +398,7 @@ def textual_inversion(config):
gradient_accumulation_steps=config['args']['gradient_accumulation_steps'],
mixed_precision=config['args']['mixed_precision']
)
# If passed along, set the training seed.
if config['args']['seed']:
set_seed(config['args']['seed'])
@ -442,9 +442,9 @@ def textual_inversion(config):
server_state['textual_inversion']["vae"] = AutoencoderKL.from_pretrained(
config['args']['pretrained_model_name_or_path'] + '/vae',
)
#if "unet" not in server_state['textual_inversion']:
#with server_state_lock['textual_inversion']["unet"]:
#with server_state_lock['textual_inversion']["unet"]:
server_state['textual_inversion']["unet"] = UNet2DConditionModel.from_pretrained(
config['args']['pretrained_model_name_or_path'] + '/unet',
)
@ -640,18 +640,18 @@ def textual_inversion(config):
"global_step": global_step + global_step_offset,
"resume_checkpoint": f"{basepath}/checkpoints/last.bin"
}, config)
checkpointer.save_samples(
global_step + global_step_offset,
server_state['textual_inversion']["text_encoder"],
config['args']['resolution'], config['args'][
'resolution'], 7.5, 0.0, config['args']['sample_steps'])
checkpointer.checkpoint(
global_step + global_step_offset,
server_state['textual_inversion']["text_encoder"],
path=f"{basepath}/learned_embeds.bin"
)
)
#except KeyError:
#raise StopException
@ -659,7 +659,7 @@ def textual_inversion(config):
progress_bar.set_postfix(**logs)
#accelerator.log(logs, step=global_step)
#try:
if global_step >= config['args']['max_train_steps']:
break
@ -686,166 +686,166 @@ def textual_inversion(config):
except (KeyboardInterrupt, StopException) as e:
print(f"Received Streamlit StopException or KeyboardInterrupt")
if accelerator.is_main_process:
print("Interrupted, saving checkpoint and resume state...")
checkpointer.checkpoint(global_step + global_step_offset, server_state['textual_inversion']["text_encoder"])
config['args'] = save_resume_file(basepath, {
"global_step": global_step + global_step_offset,
"resume_checkpoint": f"{basepath}/checkpoints/last.bin"
}, config)
checkpointer.checkpoint(
global_step + global_step_offset,
server_state['textual_inversion']["text_encoder"],
path=f"{basepath}/learned_embeds.bin"
)
quit()
def layout():
with st.form("textual-inversion"):
#st.info("Under Construction. :construction_worker:")
#parser = argparse.ArgumentParser(description="Simple example of a training script.")
set_page_title("Textual Inversion - Stable Diffusion Playground")
        config_tab, output_tab, tensorboard_tab = st.tabs(["Textual Inversion Config", "Output", "TensorBoard"])
with config_tab:
col1, col2, col3, col4, col5 = st.columns(5, gap='large')
if "textual_inversion" not in st.session_state:
st.session_state["textual_inversion"] = {}
if "textual_inversion" not in server_state:
server_state["textual_inversion"] = {}
if "args" not in st.session_state["textual_inversion"]:
st.session_state["textual_inversion"]["args"] = {}
with col1:
st.session_state["textual_inversion"]["args"]["pretrained_model_name_or_path"] = st.text_input("Pretrained Model Path",
value=st.session_state["defaults"].textual_inversion.pretrained_model_name_or_path,
help="Path to pretrained model or model identifier from huggingface.co/models.")
st.session_state["textual_inversion"]["args"]["tokenizer_name"] = st.text_input("Tokenizer Name",
value=st.session_state["defaults"].textual_inversion.tokenizer_name,
st.session_state["textual_inversion"]["args"]["tokenizer_name"] = st.text_input("Tokenizer Name",
value=st.session_state["defaults"].textual_inversion.tokenizer_name,
help="Pretrained tokenizer name or path if not the same as model_name")
st.session_state["textual_inversion"]["args"]["train_data_dir"] = st.text_input("train_data_dir", value="", help="A folder containing the training data.")
st.session_state["textual_inversion"]["args"]["placeholder_token"] = st.text_input("Placeholder Token", value="", help="A token to use as a placeholder for the concept.")
st.session_state["textual_inversion"]["args"]["initializer_token"] = st.text_input("Initializer Token", value="", help="A token to use as initializer word.")
st.session_state["textual_inversion"]["args"]["learnable_property"] = st.selectbox("Learnable Property", ["object", "style"], index=0, help="Choose between 'object' and 'style'")
st.session_state["textual_inversion"]["args"]["repeats"] = int(st.text_input("Number of times to Repeat", value=100, help="How many times to repeat the training data."))
with col2:
st.session_state["textual_inversion"]["args"]["output_dir"] = st.text_input("Output Directory",
value=str(os.path.join("outputs", "textual_inversion")),
help="The output directory where the model predictions and checkpoints will be written.")
st.session_state["textual_inversion"]["args"]["seed"] = seed_to_int(st.text_input("Seed", value=0,
help="A seed for reproducible training, if left empty a random one will be generated. Default: 0"))
st.session_state["textual_inversion"]["args"]["resolution"] = int(st.text_input("Resolution", value=512,
help="The resolution for input images, all the images in the train/validation dataset will be resized to this resolution"))
st.session_state["textual_inversion"]["args"]["center_crop"] = st.checkbox("Center Image", value=True, help="Whether to center crop images before resizing to resolution")
st.session_state["textual_inversion"]["args"]["train_batch_size"] = int(st.text_input("Train Batch Size", value=1, help="Batch size (per device) for the training dataloader."))
st.session_state["textual_inversion"]["args"]["num_train_epochs"] = int(st.text_input("Number of Steps to Train", value=100, help="Number of steps to train."))
st.session_state["textual_inversion"]["args"]["max_train_steps"] = int(st.text_input("Max Number of Steps to Train", value=5000,
help="Total number of training steps to perform. If provided, overrides 'Number of Steps to Train'."))
with col3:
st.session_state["textual_inversion"]["args"]["gradient_accumulation_steps"] = int(st.text_input("Gradient Accumulation Steps", value=1,
help="Number of updates steps to accumulate before performing a backward/update pass."))
st.session_state["textual_inversion"]["args"]["learning_rate"] = float(st.text_input("Learning Rate", value=5.0e-04,
help="Initial learning rate (after the potential warmup period) to use."))
st.session_state["textual_inversion"]["args"]["scale_lr"] = st.checkbox("Scale Learning Rate", value=True,
help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.")
st.session_state["textual_inversion"]["args"]["lr_scheduler"] = st.text_input("Learning Rate Scheduler", value="constant",
help=("The scheduler type to use. Choose between ['linear', 'cosine', 'cosine_with_restarts', 'polynomial',"
" 'constant', 'constant_with_warmup']" ))
st.session_state["textual_inversion"]["args"]["lr_warmup_steps"] = int(st.text_input("Learning Rate Warmup Steps", value=500, help="Number of steps for the warmup in the lr scheduler."))
st.session_state["textual_inversion"]["args"]["adam_beta1"] = float(st.text_input("Adam Beta 1", value=0.9, help="The beta1 parameter for the Adam optimizer."))
st.session_state["textual_inversion"]["args"]["adam_beta2"] = float(st.text_input("Adam Beta 2", value=0.999, help="The beta2 parameter for the Adam optimizer."))
st.session_state["textual_inversion"]["args"]["adam_weight_decay"] = float(st.text_input("Adam Weight Decay", value=1e-2, help="Weight decay to use."))
st.session_state["textual_inversion"]["args"]["adam_epsilon"] = float(st.text_input("Adam Epsilon", value=1e-08, help="Epsilon value for the Adam optimizer"))
with col4:
st.session_state["textual_inversion"]["args"]["mixed_precision"] = st.selectbox("Mixed Precision", ["no", "fp16", "bf16"], index=1,
help="Whether to use mixed precision. Choose" "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.")
st.session_state["textual_inversion"]["args"]["local_rank"] = int(st.text_input("Local Rank", value=1, help="For distributed training: local_rank"))
st.session_state["textual_inversion"]["args"]["checkpoint_frequency"] = int(st.text_input("Checkpoint Frequency", value=500, help="How often to save a checkpoint and sample image"))
                # stable_sample_batches is crashing when saving the samples, so for now I will disable it until it's fixed.
#st.session_state["textual_inversion"]["args"]["stable_sample_batches"] = int(st.text_input("Stable Sample Batches", value=0,
#help="Number of fixed seed sample batches to generate per checkpoint"))
st.session_state["textual_inversion"]["args"]["stable_sample_batches"] = 0
st.session_state["textual_inversion"]["args"]["stable_sample_batches"] = 0
st.session_state["textual_inversion"]["args"]["random_sample_batches"] = int(st.text_input("Random Sample Batches", value=2,
help="Number of random seed sample batches to generate per checkpoint"))
st.session_state["textual_inversion"]["args"]["sample_batch_size"] = int(st.text_input("Sample Batch Size", value=1, help="Number of samples to generate per batch"))
st.session_state["textual_inversion"]["args"]["sample_steps"] = int(st.text_input("Sample Steps", value=100,
help="Number of steps for sample generation. Higher values will result in more detailed samples, but longer runtimes."))
st.session_state["textual_inversion"]["args"]["custom_templates"] = st.text_input("Custom Templates", value="",
help="A semicolon-delimited list of custom template to use for samples, using {} as a placeholder for the concept.")
            with col5:
st.session_state["textual_inversion"]["args"]["resume"] = st.checkbox(label="Resume Previous Run?", value=False,
help="Resume previous run, if a valid resume.json file is on the output dir \
it will be used, otherwise if the 'Resume From' field bellow contains a valid resume.json file \
that one will be used.")
st.session_state["textual_inversion"]["args"]["resume_from"] = st.text_input(label="Resume From", help="Path to a directory to resume training from (ie, logs/token_name)")
#st.session_state["textual_inversion"]["args"]["resume_checkpoint"] = st.file_uploader("Resume Checkpoint", type=["bin"],
#help="Path to a specific checkpoint to resume training from (ie, logs/token_name/checkpoints/something.bin).")
#st.session_state["textual_inversion"]["args"]["st.session_state["textual_inversion"]"] = st.file_uploader("st.session_state["textual_inversion"] File", type=["json"],
#help="Path to a JSON st.session_state["textual_inversion"]uration file containing arguments for invoking this script."
#"If resume_from is given, its resume.json takes priority over this.")
#
#
#print (os.path.join(st.session_state["textual_inversion"]["args"]["output_dir"],st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"),"resume.json"))
#print (os.path.exists(os.path.join(st.session_state["textual_inversion"]["args"]["output_dir"],st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"),"resume.json")))
if os.path.exists(os.path.join(st.session_state["textual_inversion"]["args"]["output_dir"],st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"),"resume.json")):
st.session_state["textual_inversion"]["args"]["resume_from"] = os.path.join(
st.session_state["textual_inversion"]["args"]["output_dir"], st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"))
#print (st.session_state["textual_inversion"]["args"]["resume_from"])
if os.path.exists(os.path.join(st.session_state["textual_inversion"]["args"]["output_dir"],st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"), "checkpoints","last.bin")):
st.session_state["textual_inversion"]["args"]["resume_checkpoint"] = os.path.join(
st.session_state["textual_inversion"]["args"]["output_dir"], st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"), "checkpoints","last.bin")
st.session_state["textual_inversion"]["args"]["output_dir"], st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"), "checkpoints","last.bin")
#if "resume_from" in st.session_state["textual_inversion"]["args"]:
#if st.session_state["textual_inversion"]["args"]["resume_from"]:
            #if os.path.exists(os.path.join(st.session_state["textual_inversion"]['args']['resume_from'], "resume.json")):
#with open(os.path.join(st.session_state["textual_inversion"]['args']['resume_from'], "resume.json"), 'rt') as f:
#try:
#resume_json = json.load(f)["args"]
@ -854,87 +854,86 @@ def layout():
#st.session_state["textual_inversion"]["args"]["output_dir"], st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"))
#except json.decoder.JSONDecodeError:
#pass
#print(st.session_state["textual_inversion"]["args"])
#print(st.session_state["textual_inversion"]["args"]['resume_from'])
#elif st.session_state["textual_inversion"]["args"]["st.session_state["textual_inversion"]"] is not None:
#with open(st.session_state["textual_inversion"]["args"]["st.session_state["textual_inversion"]"], 'rt') as f:
#args = parser.parse_args(namespace=argparse.Namespace(**json.load(f)["args"]))
env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
if env_local_rank != -1 and env_local_rank != st.session_state["textual_inversion"]["args"]["local_rank"]:
st.session_state["textual_inversion"]["args"]["local_rank"] = env_local_rank
if st.session_state["textual_inversion"]["args"]["train_data_dir"] is None:
st.error("You must specify --train_data_dir")
if st.session_state["textual_inversion"]["args"]["pretrained_model_name_or_path"] is None:
st.error("You must specify --pretrained_model_name_or_path")
if st.session_state["textual_inversion"]["args"]["placeholder_token"] is None:
st.error("You must specify --placeholder_token")
if st.session_state["textual_inversion"]["args"]["initializer_token"] is None:
st.error("You must specify --initializer_token")
if st.session_state["textual_inversion"]["args"]["output_dir"] is None:
st.error("You must specify --output_dir")
# add a spacer and the submit button for the form.
st.session_state["textual_inversion"]["message"] = st.empty()
st.session_state["textual_inversion"]["progress_bar"] = st.empty()
st.write("---")
submit = st.form_submit_button("Run",help="")
if submit:
if "pipe" in st.session_state:
del st.session_state["pipe"]
if "model" in st.session_state:
del st.session_state["model"]
set_page_title("Running Textual Inversion - Stable Diffusion WebUI")
#st.session_state["textual_inversion"]["message"].info("Textual Inversion Running. For more info check the progress on your console or the Ouput Tab.")
try:
#try:
# run textual inversion.
config = st.session_state['textual_inversion']
                textual_inversion(config)
#except RuntimeError:
#if "pipeline" in server_state["textual_inversion"]:
#del server_state['textual_inversion']["checker"]
#del server_state['textual_inversion']["unwrapped"]
#del server_state['textual_inversion']["pipeline"]
#del server_state['textual_inversion']["pipeline"]
# run textual inversion.
#config = st.session_state['textual_inversion']
                #textual_inversion(config)
set_page_title("Textual Inversion - Stable Diffusion WebUI")
except StopException:
set_page_title("Textual Inversion - Stable Diffusion WebUI")
print(f"Received Streamlit StopException")
st.session_state["textual_inversion"]["message"].empty()
#
with output_tab:
st.info("Under Construction. :construction_worker:")
#st.info("Nothing to show yet. Maybe try running some training first.")
#st.session_state["textual_inversion"]["preview_image"] = st.empty()
#st.session_state["textual_inversion"]["progress_bar"] = st.empty()
#st.session_state["textual_inversion"]["progress_bar"] = st.empty()
with tensorboard_tab:
#st.info("Under Construction. :construction_worker:")
# Start TensorBoard
st_tensorboard(logdir=os.path.join("outputs", "textual_inversion"), port=8888)


@ -1,6 +1,6 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -25,16 +25,19 @@ from streamlit.elements.image import image_to_url
#other imports
import uuid
from typing import Union
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.plms import PLMSSampler
# streamlit components
from custom_components import sygil_suggestions
# Temp imports
# end of imports
#---------------------------------------------------------------------------------------------------------------
sygil_suggestions.init()
try:
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
@ -90,81 +93,299 @@ class plugin_info():
isTab = True
displayPriority = 1
@logger.catch(reraise=True)
def stable_horde(outpath, prompt, seed, sampler_name, save_grid, batch_size,
n_iter, steps, cfg_scale, width, height, prompt_matrix, use_GFPGAN, GFPGAN_model,
use_RealESRGAN, realesrgan_model_name, use_LDSR,
LDSR_model_name, ddim_eta, normalize_prompt_weights,
save_individual_images, sort_samples, write_info_files,
jpg_sample, variant_amount, variant_seed, api_key,
nsfw=True, censor_nsfw=False):
log = []
log.append("Generating image with Stable Horde.")
st.session_state["progress_bar_text"].code('\n'.join(log), language='')
# start time after garbage collection (or before?)
start_time = time.time()
    # This timestamp is used later for the output folder name.
run_start_dt = datetime.datetime.now()
mem_mon = MemUsageMonitor('MemMon')
mem_mon.start()
os.makedirs(outpath, exist_ok=True)
sample_path = os.path.join(outpath, "samples")
os.makedirs(sample_path, exist_ok=True)
params = {
"sampler_name": "k_euler",
"toggles": [1,4],
"cfg_scale": cfg_scale,
"seed": str(seed),
"width": width,
"height": height,
"seed_variation": variant_seed if variant_seed else 1,
"steps": int(steps),
"n": int(n_iter)
# You can put extra params here if you wish
}
final_submit_dict = {
"prompt": prompt,
"params": params,
"nsfw": nsfw,
"censor_nsfw": censor_nsfw,
"trusted_workers": True,
"workers": []
}
log.append(final_submit_dict)
headers = {"apikey": api_key}
logger.debug(final_submit_dict)
st.session_state["progress_bar_text"].code('\n'.join(str(log)), language='')
horde_url = "https://stablehorde.net"
submit_req = requests.post(f'{horde_url}/api/v2/generate/async', json = final_submit_dict, headers = headers)
if submit_req.ok:
submit_results = submit_req.json()
logger.debug(submit_results)
log.append(submit_results)
st.session_state["progress_bar_text"].code(''.join(str(log)), language='')
req_id = submit_results['id']
is_done = False
while not is_done:
chk_req = requests.get(f'{horde_url}/api/v2/generate/check/{req_id}')
if not chk_req.ok:
logger.error(chk_req.text)
return
chk_results = chk_req.json()
logger.info(chk_results)
is_done = chk_results['done']
time.sleep(1)
retrieve_req = requests.get(f'{horde_url}/api/v2/generate/status/{req_id}')
if not retrieve_req.ok:
logger.error(retrieve_req.text)
return
results_json = retrieve_req.json()
# logger.debug(results_json)
results = results_json['generations']
output_images = []
comments = []
prompt_matrix_parts = []
if not st.session_state['defaults'].general.no_verify_input:
try:
check_prompt_length(prompt, comments)
        except Exception:
            import traceback
            logger.error("Error verifying input:")
            logger.error(traceback.format_exc())
all_prompts = batch_size * n_iter * [prompt]
all_seeds = [seed + x for x in range(len(all_prompts))]
for iter in range(len(results)):
b64img = results[iter]["img"]
base64_bytes = b64img.encode('utf-8')
img_bytes = base64.b64decode(base64_bytes)
img = Image.open(BytesIO(img_bytes))
sanitized_prompt = slugify(prompt)
prompts = all_prompts[iter * batch_size:(iter + 1) * batch_size]
#captions = prompt_matrix_parts[n * batch_size:(n + 1) * batch_size]
seeds = all_seeds[iter * batch_size:(iter + 1) * batch_size]
if sort_samples:
full_path = os.path.join(os.getcwd(), sample_path, sanitized_prompt)
sanitized_prompt = sanitized_prompt[:200-len(full_path)]
sample_path_i = os.path.join(sample_path, sanitized_prompt)
#print(f"output folder length: {len(os.path.join(os.getcwd(), sample_path_i))}")
#print(os.path.join(os.getcwd(), sample_path_i))
os.makedirs(sample_path_i, exist_ok=True)
base_count = get_next_sequence_number(sample_path_i)
filename = f"{base_count:05}-{steps}_{sampler_name}_{seeds[iter]}"
else:
full_path = os.path.join(os.getcwd(), sample_path)
sample_path_i = sample_path
base_count = get_next_sequence_number(sample_path_i)
filename = f"{base_count:05}-{steps}_{sampler_name}_{seed}_{sanitized_prompt}"[:200-len(full_path)] #same as before
save_sample(img, sample_path_i, filename, jpg_sample, prompts, seeds, width, height, steps, cfg_scale,
normalize_prompt_weights, use_GFPGAN, write_info_files, prompt_matrix, init_img=None,
denoising_strength=0.75, resize_mode=None, uses_loopback=False, uses_random_seed_loopback=False,
save_grid=save_grid,
                                sort_samples=sort_samples, sampler_name=sampler_name, ddim_eta=ddim_eta, n_iter=n_iter,
batch_size=batch_size, i=iter, save_individual_images=save_individual_images,
model_name="Stable Diffusion v1.5")
output_images.append(img)
# update image on the UI so we can see the progress
if "preview_image" in st.session_state:
st.session_state["preview_image"].image(img)
if "progress_bar_text" in st.session_state:
st.session_state["progress_bar_text"].empty()
#if len(results) > 1:
#final_filename = f"{iter}_{filename}"
#img.save(final_filename)
#logger.info(f"Saved {final_filename}")
else:
if "progress_bar_text" in st.session_state:
st.session_state["progress_bar_text"].error(submit_req.text)
logger.error(submit_req.text)
mem_max_used, mem_total = mem_mon.read_and_stop()
time_diff = time.time()-start_time
info = f"""
{prompt}
Steps: {steps}, Sampler: {sampler_name}, CFG scale: {cfg_scale}, Seed: {seed}{', GFPGAN' if use_GFPGAN else ''}{', '+realesrgan_model_name if use_RealESRGAN else ''}
{', Prompt Matrix Mode.' if prompt_matrix else ''}""".strip()
stats = f'''
Took { round(time_diff, 2) }s total ({ round(time_diff/(len(all_prompts)),2) }s per image)
Peak memory usage: { -(mem_max_used // -1_048_576) } MiB / { -(mem_total // -1_048_576) } MiB / { round(mem_max_used/mem_total*100, 3) }%'''
for comment in comments:
info += "\n\n" + comment
#mem_mon.stop()
#del mem_mon
torch_gc()
return output_images, seed, info, stats
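# Distilled sketch (assuming only `requests`) of the Stable Horde round trip
# implemented above: POST the job to /generate/async, poll /generate/check
# until done, then fetch the generations from /generate/status. The payload is
# abbreviated; "0000000000" is the anonymous API key used as the default below.
def _demo_horde_roundtrip(api_key="0000000000"):
    import base64
    horde_url = "https://stablehorde.net"
    payload = {"prompt": "a watercolor fox", "params": {"steps": 30, "n": 1}}
    submit = requests.post(f"{horde_url}/api/v2/generate/async", json=payload, headers={"apikey": api_key})
    req_id = submit.json()["id"]
    while not requests.get(f"{horde_url}/api/v2/generate/check/{req_id}").json()["done"]:
        time.sleep(1)  # poll once per second, as stable_horde() does
    results = requests.get(f"{horde_url}/api/v2/generate/status/{req_id}").json()["generations"]
    return [base64.b64decode(g["img"]) for g in results]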
#
@logger.catch(reraise=True)
def txt2img(prompt: str, ddim_steps: int, sampler_name: str, n_iter: int, batch_size: int, cfg_scale: float, seed: Union[int, str, None],
height: int, width: int, separate_prompts:bool = False, normalize_prompt_weights:bool = True,
save_individual_images: bool = True, save_grid: bool = True, group_by_prompt: bool = True,
save_as_jpg: bool = True, use_GFPGAN: bool = True, GFPGAN_model: str = 'GFPGANv1.3', use_RealESRGAN: bool = False,
RealESRGAN_model: str = "RealESRGAN_x4plus_anime_6B", use_LDSR: bool = True, LDSR_model: str = "model",
fp = None, variant_amount: float = None,
variant_seed: int = None, ddim_eta:float = 0.0, write_info_files:bool = True):
RealESRGAN_model: str = "RealESRGAN_x4plus_anime_6B", use_LDSR: bool = True, LDSR_model: str = "model",
fp = None, variant_amount: float = 0.0,
variant_seed: int = None, ddim_eta:float = 0.0, write_info_files:bool = True,
use_stable_horde: bool = False, stable_horde_key:str = "0000000000"):
outpath = st.session_state['defaults'].general.outdir_txt2img
seed = seed_to_int(seed)
if not use_stable_horde:
if sampler_name == 'PLMS':
sampler = PLMSSampler(server_state["model"])
elif sampler_name == 'DDIM':
sampler = DDIMSampler(server_state["model"])
elif sampler_name == 'k_dpm_2_a':
sampler = KDiffusionSampler(server_state["model"],'dpm_2_ancestral')
elif sampler_name == 'k_dpm_2':
sampler = KDiffusionSampler(server_state["model"],'dpm_2')
elif sampler_name == 'k_euler_a':
sampler = KDiffusionSampler(server_state["model"],'euler_ancestral')
elif sampler_name == 'k_euler':
sampler = KDiffusionSampler(server_state["model"],'euler')
elif sampler_name == 'k_heun':
sampler = KDiffusionSampler(server_state["model"],'heun')
elif sampler_name == 'k_lms':
sampler = KDiffusionSampler(server_state["model"],'lms')
else:
raise Exception("Unknown sampler: " + sampler_name)
def init():
pass
def sample(init_data, x, conditioning, unconditional_conditioning, sampler_name):
samples_ddim, _ = sampler.sample(S=ddim_steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=cfg_scale,
unconditional_conditioning=unconditional_conditioning, eta=ddim_eta, x_T=x,
img_callback=generation_callback if not server_state["bridge"] else None,
log_every_t=int(st.session_state.update_preview_frequency if not server_state["bridge"] else 100))
return samples_ddim
if use_stable_horde:
output_images, seed, info, stats = stable_horde(
prompt=prompt,
seed=seed,
outpath=outpath,
sampler_name=sampler_name,
save_grid=save_grid,
batch_size=batch_size,
n_iter=n_iter,
steps=ddim_steps,
cfg_scale=cfg_scale,
width=width,
height=height,
prompt_matrix=separate_prompts,
use_GFPGAN=use_GFPGAN,
GFPGAN_model=GFPGAN_model,
use_RealESRGAN=use_RealESRGAN,
realesrgan_model_name=RealESRGAN_model,
use_LDSR=use_LDSR,
LDSR_model_name=LDSR_model,
ddim_eta=ddim_eta,
normalize_prompt_weights=normalize_prompt_weights,
save_individual_images=save_individual_images,
sort_samples=group_by_prompt,
write_info_files=write_info_files,
jpg_sample=save_as_jpg,
variant_amount=variant_amount,
variant_seed=variant_seed,
api_key=stable_horde_key
)
else:
raise Exception("Unknown sampler: " + sampler_name)
def init():
pass
#try:
output_images, seed, info, stats = process_images(
outpath=outpath,
func_init=init,
func_sample=sample,
prompt=prompt,
seed=seed,
sampler_name=sampler_name,
save_grid=save_grid,
batch_size=batch_size,
n_iter=n_iter,
steps=ddim_steps,
cfg_scale=cfg_scale,
width=width,
height=height,
prompt_matrix=separate_prompts,
use_GFPGAN=use_GFPGAN,
GFPGAN_model=GFPGAN_model,
use_RealESRGAN=use_RealESRGAN,
realesrgan_model_name=RealESRGAN_model,
use_LDSR=use_LDSR,
LDSR_model_name=LDSR_model,
ddim_eta=ddim_eta,
normalize_prompt_weights=normalize_prompt_weights,
save_individual_images=save_individual_images,
sort_samples=group_by_prompt,
write_info_files=write_info_files,
jpg_sample=save_as_jpg,
variant_amount=variant_amount,
variant_seed=variant_seed,
)
def sample(init_data, x, conditioning, unconditional_conditioning, sampler_name):
samples_ddim, _ = sampler.sample(S=ddim_steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=cfg_scale,
unconditional_conditioning=unconditional_conditioning, eta=ddim_eta, x_T=x, img_callback=generation_callback,
log_every_t=int(st.session_state.update_preview_frequency))
return samples_ddim
#try:
output_images, seed, info, stats = process_images(
outpath=outpath,
func_init=init,
func_sample=sample,
prompt=prompt,
seed=seed,
sampler_name=sampler_name,
save_grid=save_grid,
batch_size=batch_size,
n_iter=n_iter,
steps=ddim_steps,
cfg_scale=cfg_scale,
width=width,
height=height,
prompt_matrix=separate_prompts,
use_GFPGAN=st.session_state["use_GFPGAN"],
GFPGAN_model=st.session_state["GFPGAN_model"],
use_RealESRGAN=st.session_state["use_RealESRGAN"],
realesrgan_model_name=RealESRGAN_model,
use_LDSR=st.session_state["use_LDSR"],
LDSR_model_name=LDSR_model,
ddim_eta=ddim_eta,
normalize_prompt_weights=normalize_prompt_weights,
save_individual_images=save_individual_images,
sort_samples=group_by_prompt,
write_info_files=write_info_files,
jpg_sample=save_as_jpg,
variant_amount=variant_amount,
variant_seed=variant_seed,
)
del sampler
del sampler
return output_images, seed, info, stats
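# The if/elif ladder near the top of txt2img() maps UI sampler names onto
# k-diffusion schedule names. A table-driven sketch of the same mapping (an
# illustration, not part of the source; it assumes PLMSSampler, DDIMSampler
# and KDiffusionSampler are in scope as imported above):
K_SCHEDULES = {'k_dpm_2_a': 'dpm_2_ancestral', 'k_dpm_2': 'dpm_2',
               'k_euler_a': 'euler_ancestral', 'k_euler': 'euler',
               'k_heun': 'heun', 'k_lms': 'lms'}
def make_sampler(sampler_name, model):
    # dispatch on the UI name; unknown names fail the same way as the ladder
    if sampler_name == 'PLMS':
        return PLMSSampler(model)
    if sampler_name == 'DDIM':
        return DDIMSampler(model)
    if sampler_name in K_SCHEDULES:
        return KDiffusionSampler(model, K_SCHEDULES[sampler_name])
    raise Exception("Unknown sampler: " + sampler_name)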
@ -173,8 +394,9 @@ def txt2img(prompt: str, ddim_steps: int, sampler_name: str, n_iter: int, batch_
#err_msg = f'CRASHED:<br><textarea rows="5" style="color:white;background: black;width: -webkit-fill-available;font-family: monospace;font-size: small;font-weight: bold;">{str(e)}</textarea><br><br>Please wait while the program restarts.'
#stats = err_msg
#return [], seed, 'err', stats
#
@logger.catch(reraise=True)
def layout():
with st.form("txt2img-inputs"):
st.session_state["generation_mode"] = "txt2img"
@ -183,20 +405,22 @@ def layout():
with input_col1:
#prompt = st.text_area("Input Text","")
prompt = st.text_input("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.")
placeholder = "A corgi wearing a top hat as an oil painting."
prompt = st.text_area("Input Text","", placeholder=placeholder, height=54)
sygil_suggestions.suggestion_area(placeholder)
# creating the page layout using columns
col1, col2, col3 = st.columns([1,2,1], gap="large")
col1, col2, col3 = st.columns([2,5,2], gap="large")
with col1:
width = st.slider("Width:", min_value=st.session_state['defaults'].txt2img.width.min_value, max_value=st.session_state['defaults'].txt2img.width.max_value,
value=st.session_state['defaults'].txt2img.width.value, step=st.session_state['defaults'].txt2img.width.step)
height = st.slider("Height:", min_value=st.session_state['defaults'].txt2img.height.min_value, max_value=st.session_state['defaults'].txt2img.height.max_value,
value=st.session_state['defaults'].txt2img.height.value, step=st.session_state['defaults'].txt2img.height.step)
cfg_scale = st.slider("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].txt2img.cfg_scale.min_value,
max_value=st.session_state['defaults'].txt2img.cfg_scale.max_value,
cfg_scale = st.number_input("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].txt2img.cfg_scale.min_value,
value=st.session_state['defaults'].txt2img.cfg_scale.value, step=st.session_state['defaults'].txt2img.cfg_scale.step,
help="How strongly the image should follow the prompt.")
seed = st.text_input("Seed:", value=st.session_state['defaults'].txt2img.seed, help=" The seed to use, if left blank a random seed will be generated.")
with st.expander("Batch Options"):
@ -209,22 +433,24 @@ def layout():
#help="How many images are at once in a batch.\
#It increases the VRAM usage a lot but if you have enough VRAM it can reduce the time it takes to finish generation as more images are generated at once.\
#Default: 1")
st.session_state["batch_count"] = int(st.text_input("Batch count.", value=st.session_state['defaults'].txt2img.batch_count.value,
help="How many iterations or batches of images to generate in total."))
st.session_state["batch_size"] = int(st.text_input("Batch size", value=st.session_state.defaults.txt2img.batch_size.value,
st.session_state["batch_count"] = st.number_input("Batch count.", value=st.session_state['defaults'].txt2img.batch_count.value,
help="How many iterations or batches of images to generate in total.")
st.session_state["batch_size"] = st.number_input("Batch size", value=st.session_state.defaults.txt2img.batch_size.value,
help="How many images are at once in a batch.\
It increases the VRAM usage a lot but if you have enough VRAM it can reduce the time it takes \
to finish generation as more images are generated at once.\
Default: 1") )
Default: 1")
with st.expander("Preview Settings"):
st.session_state["update_preview"] = st.session_state["defaults"].general.update_preview
st.session_state["update_preview_frequency"] = st.text_input("Update Image Preview Frequency", value=st.session_state['defaults'].txt2img.update_preview_frequency,
help="Frequency in steps at which the the preview image is updated. By default the frequency \
is set to 10 step.")
st.session_state["update_preview_frequency"] = st.number_input("Update Image Preview Frequency",
min_value=0,
value=st.session_state['defaults'].txt2img.update_preview_frequency,
help="Frequency in steps at which the the preview image is updated. By default the frequency \
is set to 10 step.")
with col2:
preview_tab, gallery_tab = st.tabs(["Preview", "Gallery"])
@ -238,18 +464,18 @@ def layout():
# create an empty container for the image, progress bar, etc so we can update it later and use session_state to hold them globally.
st.session_state["preview_image"] = st.empty()
st.session_state["progress_bar_text"] = st.empty()
st.session_state["progress_bar_text"].info("Nothing but crickets here, try generating something first.")
st.session_state["progress_bar"] = st.empty()
message = st.empty()
with gallery_tab:
st.session_state["gallery"] = st.empty()
st.session_state["gallery"].info("Nothing but crickets here, try generating something first.")
st.session_state["gallery"] = st.empty()
#st.session_state["gallery"].info("Nothing but crickets here, try generating something first.")
with col3:
# If we have custom models available on the "models/custom"
@ -262,47 +488,52 @@ def layout():
help="Select the model you want to use. This option is only available if you have custom models \
on your 'models/custom' folder. The model name that will be shown here is the same as the name\
the file for the model has on said folder, it is recommended to give the .ckpt file a name that \
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.4")
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.5")
st.session_state.sampling_steps = st.slider("Sampling Steps", value=st.session_state.defaults.txt2img.sampling_steps.value,
min_value=st.session_state.defaults.txt2img.sampling_steps.min_value,
max_value=st.session_state['defaults'].txt2img.sampling_steps.max_value,
step=st.session_state['defaults'].txt2img.sampling_steps.step)
st.session_state.sampling_steps = st.number_input("Sampling Steps", value=st.session_state.defaults.txt2img.sampling_steps.value,
min_value=st.session_state.defaults.txt2img.sampling_steps.min_value,
step=st.session_state['defaults'].txt2img.sampling_steps.step,
help="Set the default number of sampling steps to use. Default is: 30 (with k_euler)")
sampler_name_list = ["k_lms", "k_euler", "k_euler_a", "k_dpm_2", "k_dpm_2_a", "k_heun", "PLMS", "DDIM"]
sampler_name = st.selectbox("Sampling method", sampler_name_list,
index=sampler_name_list.index(st.session_state['defaults'].txt2img.default_sampler), help="Sampling method to use. Default: k_euler")
with st.expander("Advanced"):
with st.expander("Stable Horde"):
use_stable_horde = st.checkbox("Use Stable Horde", value=False, help="Use the Stable Horde to generate images. More info can be found at https://stablehorde.net/")
stable_horde_key = st.text_input("Stable Horde API Key", value=st.session_state['defaults'].general.stable_horde_api, type="password",
help="Optional API key used for the Stable Horde bridge; if no API key is provided, the horde will be used anonymously.")
with st.expander("Output Settings"):
separate_prompts = st.checkbox("Create Prompt Matrix.", value=st.session_state['defaults'].txt2img.separate_prompts,
help="Separate multiple prompts using the `|` character, and get all combinations of them.")
normalize_prompt_weights = st.checkbox("Normalize Prompt Weights.", value=st.session_state['defaults'].txt2img.normalize_prompt_weights,
help="Ensure the sum of all weights add up to 1.0")
save_individual_images = st.checkbox("Save individual images.", value=st.session_state['defaults'].txt2img.save_individual_images,
help="Save each image generated before any filter or enhancement is applied.")
save_grid = st.checkbox("Save grid",value=st.session_state['defaults'].txt2img.save_grid, help="Save a grid with all the images generated into a single image.")
group_by_prompt = st.checkbox("Group results by prompt", value=st.session_state['defaults'].txt2img.group_by_prompt,
help="Saves all the images with the same prompt into the same folder. When using a prompt matrix each prompt combination will have its own folder.")
write_info_files = st.checkbox("Write Info file", value=st.session_state['defaults'].txt2img.write_info_files,
help="Save a file next to the image with information about the generation.")
save_as_jpg = st.checkbox("Save samples as jpg", value=st.session_state['defaults'].txt2img.save_as_jpg, help="Saves the images as jpg instead of png.")
# check if GFPGAN, RealESRGAN and LDSR are available.
if "GFPGAN_available" not in st.session_state:
GFPGAN_available()
if "RealESRGAN_available" not in st.session_state:
RealESRGAN_available()
if "LDSR_available" not in st.session_state:
LDSR_available()
#if "GFPGAN_available" not in st.session_state:
GFPGAN_available()
#if "RealESRGAN_available" not in st.session_state:
RealESRGAN_available()
#if "LDSR_available" not in st.session_state:
LDSR_available()
if st.session_state["GFPGAN_available"] or st.session_state["RealESRGAN_available"] or st.session_state["LDSR_available"]:
with st.expander("Post-Processing"):
face_restoration_tab, upscaling_tab = st.tabs(["Face Restoration", "Upscaling"])
@ -316,45 +547,47 @@ def layout():
help="Uses the GFPGAN model to improve faces after the generation.\
This greatly improve the quality and consistency of faces but uses\
extra VRAM. Disable if you need the extra VRAM.")
st.session_state["GFPGAN_model"] = st.selectbox("GFPGAN model", st.session_state["GFPGAN_models"],
index=st.session_state["GFPGAN_models"].index(st.session_state['defaults'].general.GFPGAN_model))
index=st.session_state["GFPGAN_models"].index(st.session_state['defaults'].general.GFPGAN_model))
#st.session_state["GFPGAN_strenght"] = st.slider("Effect Strenght", min_value=1, max_value=100, value=1, step=1, help='')
else:
st.session_state["use_GFPGAN"] = False
st.session_state["use_GFPGAN"] = False
with upscaling_tab:
st.session_state['use_upscaling'] = st.checkbox("Use Upscaling", value=st.session_state['defaults'].txt2img.use_upscaling)
# RealESRGAN and LDSR used for upscaling.
# RealESRGAN and LDSR used for upscaling.
if st.session_state["RealESRGAN_available"] or st.session_state["LDSR_available"]:
upscaling_method_list = []
if st.session_state["RealESRGAN_available"]:
upscaling_method_list.append("RealESRGAN")
if st.session_state["LDSR_available"]:
upscaling_method_list.append("LDSR")
#print (st.session_state["RealESRGAN_available"])
st.session_state["upscaling_method"] = st.selectbox("Upscaling Method", upscaling_method_list,
index=upscaling_method_list.index(str(st.session_state['defaults'].general.upscaling_method)))
index=upscaling_method_list.index(st.session_state['defaults'].general.upscaling_method)
if st.session_state['defaults'].general.upscaling_method in upscaling_method_list
else 0)
if st.session_state["RealESRGAN_available"]:
with st.expander("RealESRGAN"):
if st.session_state["upscaling_method"] == "RealESRGAN" and st.session_state['use_upscaling']:
st.session_state["use_RealESRGAN"] = True
else:
st.session_state["use_RealESRGAN"] = False
st.session_state["RealESRGAN_model"] = st.selectbox("RealESRGAN model", st.session_state["RealESRGAN_models"],
index=st.session_state["RealESRGAN_models"].index(st.session_state['defaults'].general.RealESRGAN_model))
index=st.session_state["RealESRGAN_models"].index(st.session_state['defaults'].general.RealESRGAN_model))
else:
st.session_state["use_RealESRGAN"] = False
st.session_state["RealESRGAN_model"] = "RealESRGAN_x4plus"
#
if st.session_state["LDSR_available"]:
with st.expander("LDSR"):
@ -362,27 +595,27 @@ def layout():
st.session_state["use_LDSR"] = True
else:
st.session_state["use_LDSR"] = False
st.session_state["LDSR_model"] = st.selectbox("LDSR model", st.session_state["LDSR_models"],
index=st.session_state["LDSR_models"].index(st.session_state['defaults'].general.LDSR_model))
st.session_state["ldsr_sampling_steps"] = int(st.text_input("Sampling Steps", value=st.session_state['defaults'].txt2img.LDSR_config.sampling_steps,
help=""))
st.session_state["preDownScale"] = int(st.text_input("PreDownScale", value=st.session_state['defaults'].txt2img.LDSR_config.preDownScale,
help=""))
st.session_state["postDownScale"] = int(st.text_input("postDownScale", value=st.session_state['defaults'].txt2img.LDSR_config.postDownScale,
help=""))
index=st.session_state["LDSR_models"].index(st.session_state['defaults'].general.LDSR_model))
st.session_state["ldsr_sampling_steps"] = st.number_input("Sampling Steps", value=st.session_state['defaults'].txt2img.LDSR_config.sampling_steps,
help="")
st.session_state["preDownScale"] = st.number_input("PreDownScale", value=st.session_state['defaults'].txt2img.LDSR_config.preDownScale,
help="")
st.session_state["postDownScale"] = st.number_input("postDownScale", value=st.session_state['defaults'].txt2img.LDSR_config.postDownScale,
help="")
downsample_method_list = ['Nearest', 'Lanczos']
st.session_state["downsample_method"] = st.selectbox("Downsample Method", downsample_method_list,
index=downsample_method_list.index(st.session_state['defaults'].txt2img.LDSR_config.downsample_method))
else:
st.session_state["use_LDSR"] = False
st.session_state["LDSR_model"] = "model"
st.session_state["LDSR_model"] = "model"
with st.expander("Variant"):
variant_amount = st.slider("Variant Amount:", value=st.session_state['defaults'].txt2img.variant_amount.value,
min_value=st.session_state['defaults'].txt2img.variant_amount.min_value, max_value=st.session_state['defaults'].txt2img.variant_amount.max_value,
@ -398,68 +631,36 @@ def layout():
generate_button = generate_col1.form_submit_button("Generate")
#
if generate_button:
with col2:
with hc.HyLoader('Loading Models...', hc.Loaders.standard_loaders,index=[0]):
load_models(use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
use_GFPGAN=st.session_state["use_GFPGAN"], GFPGAN_model=st.session_state["GFPGAN_model"] ,
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
CustomModel_available=server_state["CustomModel_available"], custom_model=st.session_state["custom_model"])
if generate_button:
with col2:
if not use_stable_horde:
with hc.HyLoader('Loading Models...', hc.Loaders.standard_loaders,index=[0]):
load_models(use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
use_GFPGAN=st.session_state["use_GFPGAN"], GFPGAN_model=st.session_state["GFPGAN_model"] ,
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
CustomModel_available=server_state["CustomModel_available"], custom_model=st.session_state["custom_model"])
#print(st.session_state['use_RealESRGAN'])
#print(st.session_state['use_LDSR'])
#try:
#
output_images, seeds, info, stats = txt2img(prompt, st.session_state.sampling_steps, sampler_name, st.session_state["batch_count"], st.session_state["batch_size"],
cfg_scale, seed, height, width, separate_prompts, normalize_prompt_weights, save_individual_images,
save_grid, group_by_prompt, save_as_jpg, st.session_state["use_GFPGAN"], st.session_state['GFPGAN_model'],
save_grid, group_by_prompt, save_as_jpg, st.session_state["use_GFPGAN"], st.session_state['GFPGAN_model'],
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
variant_amount=variant_amount, variant_seed=variant_seed, write_info_files=write_info_files)
use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
variant_amount=variant_amount, variant_seed=variant_seed, write_info_files=write_info_files,
use_stable_horde=use_stable_horde, stable_horde_key=stable_horde_key)
message.success('Render Complete: ' + info + '; Stats: ' + stats, icon="")
#history_tab,col1,col2,col3,PlaceHolder,col1_cont,col2_cont,col3_cont = st.session_state['historyTab']
#if 'latestImages' in st.session_state:
#for i in output_images:
##push the new image to the list of latest images and remove the oldest one
##remove the last index from the list\
#st.session_state['latestImages'].pop()
##add the new image to the start of the list
#st.session_state['latestImages'].insert(0, i)
#PlaceHolder.empty()
#with PlaceHolder.container():
#col1, col2, col3 = st.columns(3)
#col1_cont = st.container()
#col2_cont = st.container()
#col3_cont = st.container()
#images = st.session_state['latestImages']
#with col1_cont:
#with col1:
#[st.image(images[index]) for index in [0, 3, 6] if index < len(images)]
#with col2_cont:
#with col2:
#[st.image(images[index]) for index in [1, 4, 7] if index < len(images)]
#with col3_cont:
#with col3:
#[st.image(images[index]) for index in [2, 5, 8] if index < len(images)]
#historyGallery = st.empty()
## check if output_images length is the same as seeds length
#with gallery_tab:
#st.markdown(createHTMLGallery(output_images,seeds), unsafe_allow_html=True)
#st.session_state['historyTab'] = [history_tab,col1,col2,col3,PlaceHolder,col1_cont,col2_cont,col3_cont]
with gallery_tab:
print(seeds)
logger.info(seeds)
st.session_state["gallery"].text = ""
sdGallery(output_images)
#except (StopException, KeyError):
#print(f"Received Streamlit StopException")

File diff suppressed because it is too large

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -2783,22 +2783,33 @@ if __name__ == '__main__':
if opt.bridge:
try:
import bridgeData as cd
except:
except ModuleNotFoundError as e:
logger.warning("No bridgeData found. Falling back to default where no CLI args are set.")
logger.warning(str(e))
except SyntaxError as e:
logger.warning("bridgeData found, but is malformed. Falling back to default where no CLI args are set.")
logger.warning(str(e))
except Exception as e:
logger.warning("No bridgeData found, use default where no CLI args are set")
class temp(object):
def __init__(self):
random.seed()
self.horde_url = "https://stablehorde.net"
# Give a cool name to your instance
self.horde_name = f"Automated Instance #{random.randint(-100000000, 100000000)}"
# The api_key identifies a unique user in the horde
self.horde_api_key = "0000000000"
# Put other users whose prompts you want to prioritize.
# The owner's username is always included so you don't need to add it here, unless you want it to have lower priority than another user
self.horde_priority_usernames = []
self.horde_max_power = 8
self.nsfw = True
cd = temp()
logger.warning(str(e))
finally:
try: # check if cd exists (i.e. bridgeData loaded properly)
cd
except: # if not, create defaults
class temp(object):
def __init__(self):
random.seed()
self.horde_url = "https://stablehorde.net"
# Give a cool name to your instance
self.horde_name = f"Automated Instance #{random.randint(-100000000, 100000000)}"
# The api_key identifies a unique user in the horde
self.horde_api_key = "0000000000"
# Put other users whose prompts you want to prioritize.
# The owner's username is always included so you don't need to add it here, unless you want it to have lower priority than another user
self.horde_priority_usernames = []
self.horde_max_power = 8
self.nsfw = True
cd = temp()
horde_api_key = opt.horde_api_key if opt.horde_api_key else cd.horde_api_key
horde_name = opt.horde_name if opt.horde_name else cd.horde_name
horde_url = opt.horde_url if opt.horde_url else cd.horde_url
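# The try/except/finally above boils down to "import the user's bridgeData,
# else synthesize a default config object". A condensed sketch of the same
# pattern (illustration only; attribute defaults mirror the hunk above):
#   try:
#       import bridgeData as cd
#   except Exception:
#       class cd:  # minimal stand-in with the attributes the bridge reads
#           horde_url = "https://stablehorde.net"
#           horde_api_key = "0000000000"
#           horde_priority_usernames = []
#           horde_max_power = 8
#           nsfw = True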

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team.
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -12,21 +12,22 @@
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# base webui import and utils.
#import streamlit as st
# We import hydralit like this to replace the previous stuff
# we had with native streamlit as it lets us replace things 1:1
#import hydralit as st
#import hydralit as st
import collections.abc
from sd_utils import *
# streamlit imports
import streamlit_nested_layout
#streamlit components section
from st_on_hover_tabs import on_hover_tabs
#from st_on_hover_tabs import on_hover_tabs
from streamlit_server_state import server_state, server_state_lock
#other imports
@ -35,38 +36,55 @@ import warnings
import os, toml
import k_diffusion as K
from omegaconf import OmegaConf
import argparse
if not "defaults" in st.session_state:
st.session_state["defaults"] = {}
st.session_state["defaults"] = OmegaConf.load("configs/webui/webui_streamlit.yaml")
if (os.path.exists("configs/webui/userconfig_streamlit.yaml")):
user_defaults = OmegaConf.load("configs/webui/userconfig_streamlit.yaml")
st.session_state["defaults"] = OmegaConf.merge(st.session_state["defaults"], user_defaults)
else:
OmegaConf.save(config=st.session_state.defaults, f="configs/webui/userconfig_streamlit.yaml")
loaded = OmegaConf.load("configs/webui/userconfig_streamlit.yaml")
assert st.session_state.defaults == loaded
if (os.path.exists(".streamlit/config.toml")):
st.session_state["streamlit_config"] = toml.load(".streamlit/config.toml")
# import custom components
from custom_components import draggable_number_input
# end of imports
#---------------------------------------------------------------------------------------------------------------
load_configs()
help = """
A double dash (`--`) is used to separate streamlit arguments from app arguments.
As a result using "streamlit run webui_streamlit.py --headless"
will show the help for streamlit itself and not pass any argument to our app,
we need to use "streamlit run webui_streamlit.py -- --headless"
in order to pass a command argument to this app."""
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--headless", action='store_true', help="Don't launch web server, util if you just want to run the stable horde bridge.", default=False)
parser.add_argument("--bridge", action='store_true', help="don't launch web server, but make this instance into a Horde bridge.", default=False)
parser.add_argument('--horde_api_key', action="store", required=False, type=str, help="The API key corresponding to the owner of this Horde instance")
parser.add_argument('--horde_name', action="store", required=False, type=str, help="The server name for the Horde. It will be shown to the world and there can be only one.")
parser.add_argument('--horde_url', action="store", required=False, type=str, help="The Stable Horde URL. Where the bridge will pick up prompts and send the finished generations.")
parser.add_argument('--horde_priority_usernames',type=str, action='append', required=False, help="Usernames which get priority use in this horde instance. The owner's username is always in this list.")
parser.add_argument('--horde_max_power',type=int, required=False, help="How much power this instance has to generate pictures. Min: 2")
parser.add_argument('--horde_sfw', action='store_true', required=False, help="Set to true if you do not want this worker generating NSFW images.")
parser.add_argument('--horde_blacklist', nargs='+', required=False, help="List the words that you want to blacklist.")
parser.add_argument('--horde_censorlist', nargs='+', required=False, help="List the words that you want to censor.")
parser.add_argument('--horde_censor_nsfw', action='store_true', required=False, help="Set to true if you want this bridge worker to censor NSFW images.")
parser.add_argument('--horde_model', action='store', required=False, help="Which model to run on this horde.")
parser.add_argument('-v', '--verbosity', action='count', default=0, help="The default logging level is ERROR or higher. This value increases the amount of logging shown on your screen")
parser.add_argument('-q', '--quiet', action='count', default=0, help="The default logging level is ERROR or higher. This value decreases the amount of logging shown on your screen")
opt = parser.parse_args()
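# Example invocation (illustration): flags after the bare `--` reach this
# parser instead of streamlit itself, e.g. to run only the horde bridge:
#   streamlit run scripts/webui_streamlit.py -- --headless --bridge --horde_api_key 0000000000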
with server_state_lock["bridge"]:
server_state["bridge"] = opt.bridge
try:
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
from transformers import logging
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
from transformers import logging
logging.set_verbosity_error()
logging.set_verbosity_error()
except:
pass
pass
# remove some annoying deprecation warnings that show every now and then.
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=UserWarning)
warnings.filterwarnings("ignore", category=UserWarning)
# this should force GFPGAN and RealESRGAN onto the selected gpu as well
#os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
@ -74,104 +92,253 @@ warnings.filterwarnings("ignore", category=UserWarning)
# functions to load css locally OR remotely starts here. Options exist for future flexibility. Called as st.markdown with unsafe_allow_html as css injection
# TODO, maybe look into async loading the file especially for remote fetching
# TODO, maybe look into async loading the file especially for remote fetching
def local_css(file_name):
with open(file_name) as f:
st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)
with open(file_name) as f:
st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)
def remote_css(url):
st.markdown(f'<link href="{url}" rel="stylesheet">', unsafe_allow_html=True)
st.markdown(f'<link href="{url}" rel="stylesheet">', unsafe_allow_html=True)
def load_css(isLocal, nameOrURL):
if(isLocal):
local_css(nameOrURL)
else:
remote_css(nameOrURL)
if(isLocal):
local_css(nameOrURL)
else:
remote_css(nameOrURL)
@logger.catch(reraise=True)
def layout():
"""Layout functions to define all the streamlit layout here."""
st.set_page_config(page_title="Stable Diffusion Playground", layout="wide")
#app = st.HydraApp(title='Stable Diffusion WebUI', favicon="", sidebar_state="expanded",
#hide_streamlit_markers=False, allow_url_nav=True , clear_cross_app_sessions=False)
"""Layout functions to define all the streamlit layout here."""
if not st.session_state["defaults"].debug.enable_hydralit:
st.set_page_config(page_title="Stable Diffusion Playground", layout="wide", initial_sidebar_state="collapsed")
with st.empty():
# load css as an external file, function has an option to local or remote url. Potential use when running from cloud infra that might not have access to local path.
load_css(True, 'frontend/css/streamlit.main.css')
# check if the models exist on their respective folders
with server_state_lock["GFPGAN_available"]:
if os.path.exists(os.path.join(st.session_state["defaults"].general.GFPGAN_dir, f"{st.session_state['defaults'].general.GFPGAN_model}.pth")):
server_state["GFPGAN_available"] = True
else:
server_state["GFPGAN_available"] = False
#app = st.HydraApp(title='Stable Diffusion WebUI', favicon="", sidebar_state="expanded", layout="wide",
#hide_streamlit_markers=False, allow_url_nav=True , clear_cross_app_sessions=False)
with st.empty():
# load css as an external file, function has an option to local or remote url. Potential use when running from cloud infra that might not have access to local path.
load_css(True, 'frontend/css/streamlit.main.css')
#
# specify the primary menu definition
menu_data = [
{'id': 'Stable Diffusion', 'label': 'Stable Diffusion', 'icon': 'bi bi-grid-1x2-fill'},
{'id': 'Train','label':"Train", 'icon': "bi bi-lightbulb-fill", 'submenu':[
{'id': 'Textual Inversion', 'label': 'Textual Inversion', 'icon': 'bi bi-lightbulb-fill'},
{'id': 'Fine Tuning', 'label': 'Fine Tuning', 'icon': 'bi bi-lightbulb-fill'},
]},
{'id': 'Model Manager', 'label': 'Model Manager', 'icon': 'bi bi-cloud-arrow-down-fill'},
{'id': 'Tools','label':"Tools", 'icon': "bi bi-tools", 'submenu':[
{'id': 'API Server', 'label': 'API Server', 'icon': 'bi bi-server'},
{'id': 'Barfi/BaklavaJS', 'label': 'Barfi/BaklavaJS', 'icon': 'bi bi-diagram-3-fill'},
#{'id': 'API Server', 'label': 'API Server', 'icon': 'bi bi-server'},
]},
{'id': 'Settings', 'label': 'Settings', 'icon': 'bi bi-gear-fill'},
#{'icon': "fa-solid fa-radar",'label':"Dropdown1", 'submenu':[
# {'id':' subid11','icon': "fa fa-paperclip", 'label':"Sub-item 1"},{'id':'subid12','icon': "💀", 'label':"Sub-item 2"},{'id':'subid13','icon': "fa fa-database", 'label':"Sub-item 3"}]},
#{'icon': "far fa-chart-bar", 'label':"Chart"},#no tooltip message
#{'id':' Crazy return value 💀','icon': "💀", 'label':"Calendar"},
#{'icon': "fas fa-tachometer-alt", 'label':"Dashboard",'ttip':"I'm the Dashboard tooltip!"}, #can add a tooltip message
#{'icon': "far fa-copy", 'label':"Right End"},
#{'icon': "fa-solid fa-radar",'label':"Dropdown2", 'submenu':[{'label':"Sub-item 1", 'icon': "fa fa-meh"},{'label':"Sub-item 2"},{'icon':'🙉','label':"Sub-item 3",}]},
]
over_theme = {'txc_inactive': '#FFFFFF', "menu_background":'#000000'}
menu_id = hc.nav_bar(
menu_definition=menu_data,
#home_name='Home',
#login_name='Logout',
hide_streamlit_markers=False,
override_theme=over_theme,
sticky_nav=True,
sticky_mode='pinned',
)
# check if the models exist on their respective folders
with server_state_lock["GFPGAN_available"]:
if os.path.exists(os.path.join(st.session_state["defaults"].general.GFPGAN_dir, f"{st.session_state['defaults'].general.GFPGAN_model}.pth")):
server_state["GFPGAN_available"] = True
else:
server_state["GFPGAN_available"] = False
with server_state_lock["RealESRGAN_available"]:
if os.path.exists(os.path.join(st.session_state["defaults"].general.RealESRGAN_dir, f"{st.session_state['defaults'].general.RealESRGAN_model}.pth")):
server_state["RealESRGAN_available"] = True
else:
server_state["RealESRGAN_available"] = False
#with st.sidebar:
#page = on_hover_tabs(tabName=['Stable Diffusion', "Textual Inversion","Model Manager","Settings"],
#iconName=['dashboard','model_training' ,'cloud_download', 'settings'], default_choice=0)
# need to see how to get the icons to show for the hydralit option_bar
#page = hc.option_bar([{'icon':'grid-outline','label':'Stable Diffusion'}, {'label':"Textual Inversion"},
#{'label':"Model Manager"},{'label':"Settings"}],
#horizontal_orientation=False,
#override_theme={'txc_inactive': 'white','menu_background':'#111', 'stVerticalBlock': '#111','txc_active':'yellow','option_active':'blue'})
#
#if menu_id == "Home":
#st.info("Under Construction. :construction_worker:")
if menu_id == "Stable Diffusion":
# set the page url and title
#st.experimental_set_query_params(page='stable-diffusion')
try:
set_page_title("Stable Diffusion Playground")
except NameError:
st.experimental_rerun()
txt2img_tab, img2img_tab, txt2vid_tab, img2txt_tab, post_processing_tab, concept_library_tab = st.tabs(["Text-to-Image", "Image-to-Image",
#"Inpainting",
"Text-to-Video", "Image-To-Text",
"Post-Processing","Concept Library"])
#with home_tab:
#from home import layout
#layout()
with txt2img_tab:
from txt2img import layout
layout()
with img2img_tab:
from img2img import layout
layout()
#with inpainting_tab:
#from inpainting import layout
#layout()
with txt2vid_tab:
from txt2vid import layout
layout()
with img2txt_tab:
from img2txt import layout
layout()
with post_processing_tab:
from post_processing import layout
layout()
with concept_library_tab:
from sd_concept_library import layout
layout()
#
elif menu_id == 'Model Manager':
set_page_title("Model Manager - Stable Diffusion Playground")
from ModelManager import layout
layout()
elif menu_id == 'Textual Inversion':
from textual_inversion import layout
layout()
elif menu_id == 'Fine Tuning':
#from textual_inversion import layout
#layout()
st.info("Under Construction. :construction_worker:")
elif menu_id == 'API Server':
set_page_title("API Server - Stable Diffusion Playground")
from APIServer import layout
layout()
elif menu_id == 'Barfi/BaklavaJS':
set_page_title("Barfi/BaklavaJS - Stable Diffusion Playground")
from barfi_baklavajs import layout
layout()
elif menu_id == 'Settings':
set_page_title("Settings - Stable Diffusion Playground")
from Settings import layout
layout()
# calling draggable input component module at the end, so it works on all pages
draggable_number_input.load()
with server_state_lock["RealESRGAN_available"]:
if os.path.exists(os.path.join(st.session_state["defaults"].general.RealESRGAN_dir, f"{st.session_state['defaults'].general.RealESRGAN_model}.pth")):
server_state["RealESRGAN_available"] = True
else:
server_state["RealESRGAN_available"] = False
with st.sidebar:
tabs = on_hover_tabs(tabName=['Stable Diffusion', "Textual Inversion","Model Manager","Settings"],
iconName=['dashboard','model_training' ,'cloud_download', 'settings'], default_choice=0)
# need to see how to get the icons to show for the hydralit option_bar
#tabs = hc.option_bar([{'icon':'grid-outline','label':'Stable Diffusion'}, {'label':"Textual Inversion"},
#{'label':"Model Manager"},{'label':"Settings"}],
#horizontal_orientation=False,
#override_theme={'txc_inactive': 'white','menu_background':'#111', 'stVerticalBlock': '#111','txc_active':'yellow','option_active':'blue'})
if tabs =='Stable Diffusion':
# set the page url and title
st.experimental_set_query_params(page='stable-diffusion')
try:
set_page_title("Stable Diffusion Playground")
except NameError:
st.experimental_rerun()
txt2img_tab, img2img_tab, txt2vid_tab, img2txt_tab, concept_library_tab = st.tabs(["Text-to-Image", "Image-to-Image",
"Text-to-Video", "Image-To-Text",
"Concept Library"])
#with home_tab:
#from home import layout
#layout()
with txt2img_tab:
from txt2img import layout
layout()
with img2img_tab:
from img2img import layout
layout()
with txt2vid_tab:
from txt2vid import layout
layout()
with img2txt_tab:
from img2txt import layout
layout()
with concept_library_tab:
from sd_concept_library import layout
layout()
#
elif tabs == 'Model Manager':
set_page_title("Model Manager - Stable Diffusion Playground")
from ModelManager import layout
layout()
elif tabs == 'Textual Inversion':
from textual_inversion import layout
layout()
elif tabs == 'Settings':
set_page_title("Settings - Stable Diffusion Playground")
from Settings import layout
layout()
if __name__ == '__main__':
layout()
set_logger_verbosity(opt.verbosity)
quiesce_logger(opt.quiet)
if not opt.headless:
layout()
with server_state_lock["bridge"]:
if server_state["bridge"]:
try:
import bridgeData as cd
except ModuleNotFoundError as e:
logger.warning("No bridgeData found. Falling back to default where no CLI args are set.")
logger.debug(str(e))
except SyntaxError as e:
logger.warning("bridgeData found, but is malformed. Falling back to default where no CLI args are set.")
logger.debug(str(e))
except Exception as e:
logger.warning("No bridgeData found, use default where no CLI args are set")
logger.debug(str(e))
finally:
try: # check if cd exists (i.e. bridgeData loaded properly)
cd
except: # if not, create defaults
class temp(object):
def __init__(self):
random.seed()
self.horde_url = "https://stablehorde.net"
# Give a cool name to your instance
self.horde_name = f"Automated Instance #{random.randint(-100000000, 100000000)}"
# The api_key identifies a unique user in the horde
self.horde_api_key = "0000000000"
# Put other users whose prompts you want to prioritize.
# The owner's username is always included so you don't need to add it here, unless you want it to have lower priority than another user
self.horde_priority_usernames = []
self.horde_max_power = 8
self.nsfw = True
self.censor_nsfw = False
self.blacklist = []
self.censorlist = []
self.models_to_load = ["stable_diffusion"]
cd = temp()
horde_api_key = opt.horde_api_key if opt.horde_api_key else cd.horde_api_key
horde_name = opt.horde_name if opt.horde_name else cd.horde_name
horde_url = opt.horde_url if opt.horde_url else cd.horde_url
horde_priority_usernames = opt.horde_priority_usernames if opt.horde_priority_usernames else cd.horde_priority_usernames
horde_max_power = opt.horde_max_power if opt.horde_max_power else cd.horde_max_power
# Not used yet
horde_models = [opt.horde_model] if opt.horde_model else cd.models_to_load
try:
horde_nsfw = not opt.horde_sfw if opt.horde_sfw else cd.horde_nsfw
except AttributeError:
horde_nsfw = True
try:
horde_censor_nsfw = opt.horde_censor_nsfw if opt.horde_censor_nsfw else cd.horde_censor_nsfw
except AttributeError:
horde_censor_nsfw = False
try:
horde_blacklist = opt.horde_blacklist if opt.horde_blacklist else cd.horde_blacklist
except AttributeError:
horde_blacklist = []
try:
horde_censorlist = opt.horde_censorlist if opt.horde_censorlist else cd.horde_censorlist
except AttributeError:
horde_censorlist = []
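# The AttributeError fallbacks above are getattr-with-default in disguise;
# an equivalent reading of the simpler ones (sketch, same defaults as above):
#   horde_censor_nsfw = opt.horde_censor_nsfw or getattr(cd, "horde_censor_nsfw", False)
#   horde_blacklist   = opt.horde_blacklist   or getattr(cd, "horde_blacklist", [])
#   horde_censorlist  = opt.horde_censorlist  or getattr(cd, "horde_censorlist", [])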
if horde_max_power < 2:
horde_max_power = 2
horde_max_pixels = 64*64*8*horde_max_power
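# Worked example of the pixel budget: with the default horde_max_power of 8,
# 64*64*8*8 = 262144 pixels, i.e. exactly one 512x512 image per job.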
logger.info(f"Joining Horde with parameters: Server Name '{horde_name}'. Horde URL '{horde_url}'. Max Pixels {horde_max_pixels}")
try:
# pass the callable and its arguments separately; calling run_bridge(...) here
# would run the bridge on the main thread instead of the new one.
thread = threading.Thread(target=run_bridge, args=(1, horde_api_key, horde_name, horde_url,
horde_priority_usernames, horde_max_pixels,
horde_nsfw, horde_censor_nsfw, horde_blacklist,
horde_censorlist))
thread.daemon = True
thread.start()
#run_bridge(1, horde_api_key, horde_name, horde_url, horde_priority_usernames, horde_max_pixels, horde_nsfw, horde_censor_nsfw, horde_blacklist, horde_censorlist)
except KeyboardInterrupt:
print(f"Keyboard Interrupt Received. Ending Bridge")

View File

@ -1,7 +1,7 @@
from setuptools import setup, find_packages
setup(
name='sd-webui',
name='sygil-webui',
version='0.0.1',
description='',
packages=find_packages(),

15
streamlit_webview.py Normal file
View File

@ -0,0 +1,15 @@
import os, threading, webview
from streamlit.web import bootstrap
from streamlit import config as _config
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'scripts/webui_streamlit.py')
_config.set_option("server.headless", True)
args = []
# webview.start() blocks the main thread, so start the Streamlit server on a
# background thread first; otherwise bootstrap.run would never execute until
# the window was closed and the webview would show a connection error.
#streamlit.cli.main_run(filename, args)
threading.Thread(target=bootstrap.run, args=(filename, '', args, {}), daemon=True).start()
webview.create_window('Sygil', 'http://localhost:8501', width=1000, height=800, min_size=(500, 500))
webview.start()
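# Usage note (assumption): launch this file directly with `python
# streamlit_webview.py`; bootstrap.run() starts the Streamlit server
# in-process, so a separate `streamlit run` is not needed.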

View File

@ -1,17 +1,17 @@
@echo off
:: This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
:: Copyright 2022 sd-webui team.
:: This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
::
:: Copyright 2022 Sygil-Dev team.
:: This program is free software: you can redistribute it and/or modify
:: it under the terms of the GNU Affero General Public License as published by
:: the Free Software Foundation, either version 3 of the License, or
:: (at your option) any later version.
::
:: This program is distributed in the hope that it will be useful,
:: but WITHOUT ANY WARRANTY; without even the implied warranty of
:: MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
:: GNU Affero General Public License for more details.
::
:: You should have received a copy of the GNU Affero General Public License
:: along with this program. If not, see <http://www.gnu.org/licenses/>.
:: Run all commands using this script's directory as the working directory
@ -31,7 +31,11 @@ IF EXIST custom-conda-path.txt (
FOR /F %%i IN (custom-conda-path.txt) DO set v_custom_path=%%i
)
set v_paths=%ProgramData%\miniconda3
set INSTALL_ENV_DIR=%cd%\installer_files\env
set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%
set v_paths=%INSTALL_ENV_DIR%
set v_paths=%v_paths%;%ProgramData%\miniconda3
set v_paths=%v_paths%;%USERPROFILE%\miniconda3
set v_paths=%v_paths%;%ProgramData%\anaconda3
set v_paths=%v_paths%;%USERPROFILE%\anaconda3
@ -58,20 +62,23 @@ IF "%v_conda_path%"=="" (
:CONDA_FOUND
echo Stashing local changes and pulling latest update...
git status --porcelain=1 -uno | findstr . && set "HasChanges=1" || set "HasChanges=0"
call git stash
call git pull
IF "%HasChanges%" == "0" GOTO SKIP_RESTORE
set /P restore="Do you want to restore changes you made before updating? (Y/N): "
IF /I "%restore%" == "N" (
echo Removing changes please wait...
echo Removing changes...
call git stash drop
echo Changes removed, press any key to continue...
pause >nul
echo Changes removed
) ELSE IF /I "%restore%" == "Y" (
echo Restoring changes, please wait...
echo Restoring changes...
call git stash pop --quiet
echo Changes restored, press any key to continue...
pause >nul
echo Changes restored
)
:SKIP_RESTORE
call "%v_conda_path%\Scripts\activate.bat"
for /f "delims=" %%a in ('git log -1 --format^="%%H" -- environment.yaml') DO set v_cur_hash=%%a
@ -95,12 +102,11 @@ call "%v_conda_path%\Scripts\activate.bat" "%v_conda_env_name%"
:PROMPT
set SETUPTOOLS_USE_DISTUTILS=stdlib
IF EXIST "models\ldm\stable-diffusion-v1\model.ckpt" (
set "PYTHONPATH=%~dp0"
python scripts\relauncher.py %*
IF EXIST "models\ldm\stable-diffusion-v1\Stable Diffusion v1.5.ckpt" (
python -m streamlit run scripts\webui_streamlit.py --theme.base dark
) ELSE (
echo Your model file does not exist! Place it in 'models\ldm\stable-diffusion-v1' with the name 'model.ckpt'.
pause
echo Your model file does not exist! Once the WebUI launches please visit the Model Manager page and download the models by using the Download button for each model.
python -m streamlit run scripts\webui_streamlit.py --theme.base dark
)
::cmd /k

View File

@ -1,7 +1,8 @@
#!/bin/bash -i
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# Copyright 2022 sd-webui team.
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@ -23,6 +24,9 @@ ENV_MODIFIED=$(date -r $ENV_FILE "+%s")
ENV_MODIFED_FILE=".env_updated"
ENV_UPDATED=0
INSTALL_ENV_DIR="$(pwd)/../installer_files/env" # since linux-sd.sh clones the repo into a subfolder
if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi
# Models used for upscaling
GFPGAN_MODEL="https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth"
LATENT_DIFFUSION_REPO="https://github.com/devilismyfriend/latent-diffusion.git"
@ -30,7 +34,7 @@ LSDR_CONFIG="https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1"
LSDR_MODEL="https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1"
REALESRGAN_MODEL="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth"
REALESRGAN_ANIME_MODEL="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth"
SD_CONCEPT_REPO="https://github.com/sd-webui/sd-concepts-library/archive/refs/heads/main.zip"
SD_CONCEPT_REPO="https://github.com/Sygil-Dev/sd-concepts-library/archive/refs/heads/main.zip"
if [[ -f $ENV_MODIFED_FILE ]]; then
@ -49,6 +53,11 @@ conda_env_setup () {
CUSTOM_CONDA_PATH=$(cat custom-conda-path.txt)
fi
# If a custom conda isn't specified, and the installer downloaded conda for the user, then use that
if [ -f "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ "$CUSTOM_CONDA_PATH" == "" ]; then
. "$INSTALL_ENV_DIR/etc/profile.d/conda.sh"
fi
# If custom path is set above, try to setup conda environment
if [ -f "${CUSTOM_CONDA_PATH}/etc/profile.d/conda.sh" ]; then
. "${CUSTOM_CONDA_PATH}/etc/profile.d/conda.sh"
@ -85,22 +94,6 @@ conda_env_activation () {
conda info | grep active
}
# Check to see if the SD model already exists, if not then it creates it and prompts the user to add the SD AI models to the repo directory
sd_model_loading () {
if [ -f "$DIRECTORY/models/ldm/stable-diffusion-v1/model.ckpt" ]; then
printf "AI Model already in place. Continuing...\n\n"
else
printf "\n\n########## MOVE MODEL FILE ##########\n\n"
printf "Please download the 1.4 AI Model from Huggingface (or another source) and place it inside of the stable-diffusion-webui folder\n\n"
read -p "Once you have sd-v1-4.ckpt in the project root, Press Enter...\n\n"
# Check to make sure checksum of models is the original one from HuggingFace and not a fake model set
printf "fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556 sd-v1-4.ckpt" | sha256sum --check || exit 1
mv sd-v1-4.ckpt $DIRECTORY/models/ldm/stable-diffusion-v1/model.ckpt
rm -r ./Models
fi
}
# Checks to see if the upscaling models exist in their correct locations. If they do not they will be downloaded as required
post_processor_model_loading () {
# Check to see if GFPGAN has been added yet, if not it will download it and place it in the proper directory
@ -154,9 +147,16 @@ post_processor_model_loading () {
# Show the user a prompt asking them which version of the WebUI they wish to use, Streamlit or Gradio
launch_webui () {
# skip the prompt if --bridge command-line argument is detected
for arg in "$@"; do
if [ "$arg" == "--bridge" ]; then
python -u scripts/relauncher.py "$@"
return
fi
done
printf "\n\n########## LAUNCH USING STREAMLIT OR GRADIO? ##########\n\n"
printf "Do you wish to run the WebUI using the Gradio or StreamLit Interface?\n\n"
printf "Streamlit: \nHas A More Modern UI \nMore Features Planned \nWill Be The Main UI Going Forward \nCurrently In Active Development \nMissing Some Gradio Features\n\n"
printf "Streamlit: \nHas A More Modern UI \nMore Features Planned \nWill Be The Main UI Going Forward \nCurrently In Active Development \n\n"
printf "Gradio: \nCurrently Feature Complete \nUses An Older Interface Style \nWill Not Receive Major Updates\n\n"
printf "Which Version of the WebUI Interface do you wish to use?\n"
select yn in "Streamlit" "Gradio"; do
@ -173,9 +173,9 @@ start_initialization () {
sd_model_loading
post_processor_model_loading
conda_env_activation
if [ ! -e "models/ldm/stable-diffusion-v1/model.ckpt" ]; then
echo "Your model file does not exist! Place it in 'models/ldm/stable-diffusion-v1' with the name 'model.ckpt'."
exit 1
if [ ! -e "models/ldm/stable-diffusion-v1/*.ckpt" ]; then
echo "Your model file does not exist! Streamlit will handle this automatically, however Gradio still requires this file be placed manually. If you intend to use the Gradio interface, place it in 'models/ldm/stable-diffusion-v1' with the name 'model.ckpt'."
read -p "Once you have sd-v1-4.ckpt in the project root, if you are going to use Gradio, Press Enter...\n\n"
fi
launch_webui "$@"

View File

@ -1,17 +1,17 @@
@echo off
:: This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
::
:: Copyright 2022 sd-webui team.
:: This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
:: Copyright 2022 Sygil-Dev team.
:: This program is free software: you can redistribute it and/or modify
:: it under the terms of the GNU Affero General Public License as published by
:: the Free Software Foundation, either version 3 of the License, or
:: (at your option) any later version.
::
:: This program is distributed in the hope that it will be useful,
:: but WITHOUT ANY WARRANTY; without even the implied warranty of
:: MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
:: GNU Affero General Public License for more details.
::
:: You should have received a copy of the GNU Affero General Public License
:: along with this program. If not, see <http://www.gnu.org/licenses/>.
:: Run all commands using this script's directory as the working directory
@ -58,20 +58,23 @@ IF "%v_conda_path%"=="" (
:CONDA_FOUND
echo Stashing local changes and pulling latest update...
git status --porcelain=1 -uno | findstr . && set "HasChanges=1" || set "HasChanges=0"
call git stash
call git pull
IF "%HasChanges%" == "0" GOTO SKIP_RESTORE
set /P restore="Do you want to restore changes you made before updating? (Y/N): "
IF /I "%restore%" == "N" (
echo Removing changes please wait...
echo Removing changes...
call git stash drop
echo Changes removed, press any key to continue...
pause >nul
echo Changes removed
) ELSE IF /I "%restore%" == "Y" (
echo Restoring changes, please wait...
echo Restoring changes...
call git stash pop --quiet
echo Changes restored, press any key to continue...
pause >nul
echo Changes restored
)
:SKIP_RESTORE
call "%v_conda_path%\Scripts\activate.bat"
for /f "delims=" %%a in ('git log -1 --format^="%%H" -- environment.yaml') DO set v_cur_hash=%%a
@ -96,7 +99,8 @@ call "%v_conda_path%\Scripts\activate.bat" "%v_conda_env_name%"
:PROMPT
set SETUPTOOLS_USE_DISTUTILS=stdlib
IF EXIST "models\ldm\stable-diffusion-v1\model.ckpt" (
python -m streamlit run scripts\webui_streamlit.py --theme.base dark
set "PYTHONPATH=%~dp0"
python scripts\relauncher.py %*
) ELSE (
echo Your model file does not exist! Place it in 'models\ldm\stable-diffusion-v1' with the name 'model.ckpt'.
pause