Merge branch 'dev' into master

hlky 2022-10-29 06:47:36 +01:00 committed by GitHub
commit 091520bed0
89 changed files with 152135 additions and 155017 deletions


@@ -34,7 +34,6 @@ maxMessageSize = 200
enableWebsocketCompression = false enableWebsocketCompression = false
[browser] [browser]
serverAddress = "localhost"
gatherUsageStats = false gatherUsageStats = false
serverPort = 8501 serverPort = 8501

108
README.md

@@ -1,21 +1,21 @@
# <center>Web-based UI for Stable Diffusion</center> # <center>Web-based UI for Stable Diffusion</center>
## Created by [Sygil-Dev](https://github.com/Sygil-Dev) ## Created by [Sygil.Dev](https://github.com/sygil-dev)
## [Visit Sygil-Dev's Discord Server](https://discord.gg/gyXNe4NySY) [![Discord Server](https://user-images.githubusercontent.com/5977640/190528254-9b5b4423-47ee-4f24-b4f9-fd13fba37518.png)](https://discord.gg/gyXNe4NySY) ## [Join us at Sygil.Dev's Discord Server](https://discord.gg/gyXNe4NySY) [![Discord Server](https://user-images.githubusercontent.com/5977640/190528254-9b5b4423-47ee-4f24-b4f9-fd13fba37518.png)](https://discord.gg/gyXNe4NySY)
## Installation instructions for: ## Installation instructions for:
- **[Windows](https://Sygil-Dev.github.io/stable-diffusion-webui/docs/1.windows-installation.html)** - **[Windows](https://sygil-dev.github.io/sygil-webui/docs/1.windows-installation.html)**
- **[Linux](https://Sygil-Dev.github.io/stable-diffusion-webui/docs/2.linux-installation.html)** - **[Linux](https://sygil-dev.github.io/sygil-webui/docs/2.linux-installation.html)**
### Want to ask a question or request a feature? ### Want to ask a question or request a feature?
Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions](https://github.com/Sygil-Dev/stable-diffusion-webui/discussions). Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions](https://github.com/sygil-dev/sygil-webui/discussions).
## Documentation ## Documentation
[Documentation is located here](https://Sygil-Dev.github.io/stable-diffusion-webui/) [Documentation is located here](https://sygil-dev.github.io/sygil-webui/)
## Want to contribute? ## Want to contribute?
@@ -29,23 +29,17 @@ Check the [Contribution Guide](CONTRIBUTING.md)
### Project Features: ### Project Features:
* Two great Web UI's to choose from: Streamlit or Gradio
* No more manually typing parameters, now all you have to do is write your prompt and adjust sliders
* Built-in image enhancers and upscalers, including GFPGAN and realESRGAN * Built-in image enhancers and upscalers, including GFPGAN and realESRGAN
* Generator Preview: See your image as its being made
* Run additional upscaling models on CPU to save VRAM * Run additional upscaling models on CPU to save VRAM
* Textual inversion 🔥: [info](https://textual-inversion.github.io/) - requires enabling, see [here](https://github.com/hlky/sd-enable-textual-inversion), script works as usual without it enabled * Textual inversion: [Research Paper](https://textual-inversion.github.io/)
* Advanced img2img editor with Mask and crop capabilities * K-Diffusion Samplers: A great collection of samplers to use, including:
* Mask painting 🖌️: Powerful tool for re-generating only specific parts of an image you want to change (currently Gradio only) - `k_euler`
* More diffusion samplers 🔥🔥: A great collection of samplers to use, including:
- `k_euler` (Default)
- `k_lms` - `k_lms`
- `k_euler_a` - `k_euler_a`
- `k_dpm_2` - `k_dpm_2`
@@ -54,35 +48,31 @@ Check the [Contribution Guide](CONTRIBUTING.md)
- `PLMS` - `PLMS`
- `DDIM` - `DDIM`
* Loopback: Automatically feed the last generated sample back into img2img * Loopback: Automatically feed the last generated sample back into img2img
* Prompt Weighting 🏋️: Adjust the strength of different terms in your prompt * Prompt Weighting & Negative Prompts: Gain more control over your creations
* Selectable GPU usage with `--gpu <id>` * Selectable GPU usage from Settings tab
* Memory Monitoring 🔥: Shows VRAM usage and generation time after outputting * Word Seeds: Use words instead of seed numbers
* Word Seeds 🔥: Use words instead of seed numbers * Automated Launcher: Activate conda and run Stable Diffusion with a single command
* CFG: Classifier free guidance scale, a feature for fine-tuning your output * Lighter on VRAM: 512x512 Text2Image & Image2Image tested working on 4GB (with *optimized* mode enabled in Settings)
* Automatic Launcher: Activate conda and run Stable Diffusion with a single command
* Lighter on VRAM: 512x512 Text2Image & Image2Image tested working on 4GB
* Prompt validation: If your prompt is too long, you will get a warning in the text output field * Prompt validation: If your prompt is too long, you will get a warning in the text output field
* Copy-paste generation parameters: A text output provides generation parameters in an easy to copy-paste form for easy sharing. * Sequential seeds for batches: If you use a seed of 1000 to generate two batches of two images each, four generated images will have seeds: `1000, 1001, 1002, 1003`.
* Correct seeds for batches: If you use a seed of 1000 to generate two batches of two images each, four generated images will have seeds: `1000, 1001, 1002, 1003`.
* Prompt matrix: Separate multiple prompts using the `|` character, and the system will produce an image for every combination of them. * Prompt matrix: Separate multiple prompts using the `|` character, and the system will produce an image for every combination of them.
* Loopback for Image2Image: A checkbox for img2img allowing to automatically feed output image as input for the next batch. Equivalent to saving output image, and replacing input image with it. * [Gradio] Advanced img2img editor with Mask and crop capabilities
# Stable Diffusion Web UI * [Gradio] Mask painting 🖌️: Powerful tool for re-generating only specific parts of an image you want to change (currently Gradio only)
A fully-integrated and easy way to work with Stable Diffusion right from a browser window. # SD WebUI
An easy way to work with Stable Diffusion right from your browser.
## Streamlit ## Streamlit
@@ -90,30 +80,43 @@ A fully-integrated and easy way to work with Stable Diffusion right from a brows
**Features:** **Features:**
- Clean UI with an easy to use design, with support for widescreen displays. - Clean UI with an easy to use design, with support for widescreen displays
- Dynamic live preview of your generations - *Dynamic live preview* of your generations
- Easily customizable presets right from the WebUI (Coming Soon!) - Easily customizable defaults, right from the WebUI's Settings tab
- An integrated gallery to show the generations for a prompt or session (Coming soon!) - An integrated gallery to show the generations for a prompt
- Better optimization VRAM usage optimization, less errors for bigger generations. - *Optimized VRAM* usage for bigger generations or usage on lower end GPUs
- Text2Video - Generate video clips from text prompts right from the WEb UI (WIP) - *Text to Video:* Generate video clips from text prompts right from the WebUI (WIP)
- Concepts Library - Run custom embeddings others have made via textual inversion. - Image to Text: Use [CLIP Interrogator](https://github.com/pharmapsychotic/clip-interrogator) to interrogate an image and get a prompt that you can use to generate a similar image using Stable Diffusion.
- Actively being developed with new features being added and planned - Stay Tuned! - *Concepts Library:* Run custom embeddings others have made via textual inversion.
- Streamlit is now the new primary UI for the project moving forward. - Textual Inversion training: Train your own embeddings on any photo you want and use it on your prompt.
- *Currently in active development and still missing some of the features present in the Gradio Interface.* - **Currently in development:** [Stable Horde](https://stablehorde.net/) integration; ImgLab, batch inputs, & mask editor from Gradio
**Prompt Weights & Negative Prompts:**
To give a token (tag recognized by the AI) a specific or increased weight (emphasis), add `:0.##` to the prompt, where `0.##` is a decimal that will specify the weight of all tokens before the colon.
Ex: `cat:0.30, dog:0.70` or `guy riding a bicycle :0.7, incoming car :0.30`
Negative prompts can be added by using `###`, after which any tokens will be seen as negative.
Ex: `cat playing with string ### yarn` will negate `yarn` from the generated image.
Negatives are a very powerful tool to get rid of contextually similar or related topics, but **be careful when adding them since the AI might see connections you can't**, and end up outputting gibberish.
**Tip:** Try using the same seed with different prompt configurations or weight values to see how the AI understands them; it can lead to prompts that are better tuned and less prone to error.
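As an aside, the weight and negative-prompt conventions above are easy to check mechanically. The snippet below is only an illustrative reading of that syntax (split on `###`, pull out `token:0.##` weights); it is not the parser the WebUI itself uses.

```python
# Small, illustrative parser for the conventions described above:
# "token:0.##" assigns a weight to everything before the colon, and anything
# after "###" is treated as a negative prompt. Not the WebUI's actual parser.

def parse_prompt(prompt: str):
    positive_text, _, negative_text = prompt.partition("###")

    def split_weighted(text: str):
        entries = []
        for part in (p.strip() for p in text.split(",") if p.strip()):
            token, sep, weight = part.rpartition(":")
            if sep and weight.replace(".", "", 1).isdigit():
                entries.append((token.strip(), float(weight)))
            else:
                entries.append((part, 1.0))  # no explicit weight: default to 1.0
        return entries

    return split_weighted(positive_text), split_weighted(negative_text)


positives, negatives = parse_prompt(
    "guy riding a bicycle :0.7, incoming car :0.30 ### yarn, blurry")
print(positives)  # [('guy riding a bicycle', 0.7), ('incoming car', 0.3)]
print(negatives)  # [('yarn', 1.0), ('blurry', 1.0)]
```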
Please see the [Streamlit Documentation](docs/4.streamlit-interface.md) to learn more. Please see the [Streamlit Documentation](docs/4.streamlit-interface.md) to learn more.
## Gradio ## Gradio [Legacy]
![](images/gradio/gradio-t2i.png) ![](images/gradio/gradio-t2i.png)
**Features:** **Features:**
- Older UI design that is fully functional and feature complete. - Older UI that is functional and feature complete.
- Has access to all upscaling models, including LDSR. - Has access to all upscaling models, including LDSR.
- Dynamic prompt entry automatically changes your generation settings based on `--params` in a prompt. - Dynamic prompt entry automatically changes your generation settings based on `--params` in a prompt.
- Includes quick and easy ways to send generations to Image2Image or the Image Lab for upscaling. - Includes quick and easy ways to send generations to Image2Image or the Image Lab for upscaling.
- *Note, the Gradio interface is no longer being actively developed and is only receiving bug fixes.*
**Note: the Gradio interface is no longer being actively developed by Sygil.Dev and is only receiving bug fixes.**
Please see the [Gradio Documentation](docs/5.gradio-interface.md) to learn more. Please see the [Gradio Documentation](docs/5.gradio-interface.md) to learn more.
@@ -129,7 +132,7 @@ Lets you improve faces in pictures using the GFPGAN model. There is a checkbox i
If you want to use GFPGAN to improve generated faces, you need to install it separately. If you want to use GFPGAN to improve generated faces, you need to install it separately.
Download [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it Download [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it
into the `/stable-diffusion-webui/models/gfpgan` directory. into the `/sygil-webui/models/gfpgan` directory.
### RealESRGAN ### RealESRGAN
@@ -139,25 +142,21 @@ Lets you double the resolution of generated images. There is a checkbox in every
There is also a separate tab for using RealESRGAN on any picture. There is also a separate tab for using RealESRGAN on any picture.
Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth). Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).
Put them into the `stable-diffusion-webui/models/realesrgan` directory. Put them into the `sygil-webui/models/realesrgan` directory.
### LDSR ### LDSR
Download **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.cpkt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename last.ckpt to model.ckpt and place both under `stable-diffusion-webui/models/ldsr/` Download **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.cpkt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename last.ckpt to model.ckpt and place both under `sygil-webui/models/ldsr/`
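If you would rather script the downloads above than fetch them by hand, a minimal sketch using only the Python standard library is shown below. It assumes it is run from the repository root so the `models/...` folders match the directories named in this section, and it saves the LDSR checkpoint directly as `model.ckpt` to cover the rename step. The WebUI also has its own model-download tooling (see the `model_manager` config and `db.json` elsewhere in this commit), so treat this purely as an illustration.

```python
# Illustrative helper for fetching the upscaler weights into the folders named
# above, using only the standard library. Run from the repository root.
import urllib.request
from pathlib import Path

MODELS = {
    "models/gfpgan/GFPGANv1.4.pth":
        "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth",
    "models/realesrgan/RealESRGAN_x4plus.pth":
        "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
    "models/realesrgan/RealESRGAN_x4plus_anime_6B.pth":
        "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth",
    # LDSR: last.ckpt is saved directly as model.ckpt, matching the rename step above.
    "models/ldsr/model.ckpt":
        "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1",
    "models/ldsr/project.yaml":
        "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1",
}

for path, url in MODELS.items():
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    if not target.exists():
        print(f"downloading {target} ...")
        urllib.request.urlretrieve(url, target)
```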
### GoBig, and GoLatent *(Currently on the Gradio version Only)* ### GoBig, and GoLatent *(Currently on the Gradio version Only)*
More powerful upscalers that use a separate Latent Diffusion model to more cleanly upscale images. More powerful upscalers that use a separate Latent Diffusion model to more cleanly upscale images.
Please see the [Image Enhancers Documentation](docs/6.image_enhancers.md) to learn more. Please see the [Image Enhancers Documentation](docs/6.image_enhancers.md) to learn more.
----- -----
### *Original Information From The Stable Diffusion Repo* ### *Original Information From The Stable Diffusion Repo:*
# Stable Diffusion # Stable Diffusion
@@ -212,5 +211,4 @@ Details on the training procedure and data, as well as the intended use of the m
archivePrefix={arXiv}, archivePrefix={arXiv},
primaryClass={cs.CV} primaryClass={cs.CV}
} }
``` ```


@@ -0,0 +1,443 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"private_outputs": true,
"provenance": [],
"collapsed_sections": [
"5-Bx4AsEoPU-",
"xMWVQOg0G1Pj"
]
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Sygil-Dev/sygil-webui/blob/dev/Web_based_UI_for_Stable_Diffusion_colab.ipynb)"
],
"metadata": {
"id": "S5RoIM-5IPZJ"
}
},
{
"cell_type": "markdown",
"source": [
"# README"
],
"metadata": {
"id": "5-Bx4AsEoPU-"
}
},
{
"cell_type": "markdown",
"source": [
"###<center>Web-based UI for Stable Diffusion</center>\n",
"\n",
"## Created by [Sygil-Dev](https://github.com/Sygil-Dev)\n",
"\n",
"## [Visit Sygil-Dev's Discord Server](https://discord.gg/gyXNe4NySY) [![Discord Server](https://user-images.githubusercontent.com/5977640/190528254-9b5b4423-47ee-4f24-b4f9-fd13fba37518.png)](https://discord.gg/gyXNe4NySY)\n",
"\n",
"## Installation instructions for:\n",
"\n",
"- **[Windows](https://sygil-dev.github.io/sygil-webui/docs/1.windows-installation.html)** \n",
"- **[Linux](https://sygil-dev.github.io/sygil-webui/docs/2.linux-installation.html)**\n",
"\n",
"### Want to ask a question or request a feature?\n",
"\n",
"Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions](https://github.com/Sygil-Dev/sygil-webui/discussions).\n",
"\n",
"## Documentation\n",
"\n",
"[Documentation is located here](https://sygil-dev.github.io/sygil-webui/)\n",
"\n",
"## Want to contribute?\n",
"\n",
"Check the [Contribution Guide](CONTRIBUTING.md)\n",
"\n",
"[Sygil-Dev](https://github.com/Sygil-Dev) main devs:\n",
"\n",
"* ![hlky's avatar](https://avatars.githubusercontent.com/u/106811348?s=40&v=4) [hlky](https://github.com/hlky)\n",
"* ![ZeroCool940711's avatar](https://avatars.githubusercontent.com/u/5977640?s=40&v=4)[ZeroCool940711](https://github.com/ZeroCool940711)\n",
"* ![codedealer's avatar](https://avatars.githubusercontent.com/u/4258136?s=40&v=4)[codedealer](https://github.com/codedealer)\n",
"\n",
"### Project Features:\n",
"\n",
"* Two great Web UI's to choose from: Streamlit or Gradio\n",
"\n",
"* No more manually typing parameters, now all you have to do is write your prompt and adjust sliders\n",
"\n",
"* Built-in image enhancers and upscalers, including GFPGAN and realESRGAN\n",
"\n",
"* Run additional upscaling models on CPU to save VRAM\n",
"\n",
"* Textual inversion 🔥: [info](https://textual-inversion.github.io/) - requires enabling, see [here](https://github.com/hlky/sd-enable-textual-inversion), script works as usual without it enabled\n",
"\n",
"* Advanced img2img editor with Mask and crop capabilities\n",
"\n",
"* Mask painting 🖌️: Powerful tool for re-generating only specific parts of an image you want to change (currently Gradio only)\n",
"\n",
"* More diffusion samplers 🔥🔥: A great collection of samplers to use, including:\n",
" \n",
" - `k_euler` (Default)\n",
" - `k_lms`\n",
" - `k_euler_a`\n",
" - `k_dpm_2`\n",
" - `k_dpm_2_a`\n",
" - `k_heun`\n",
" - `PLMS`\n",
" - `DDIM`\n",
"\n",
"* Loopback ➿: Automatically feed the last generated sample back into img2img\n",
"\n",
"* Prompt Weighting 🏋️: Adjust the strength of different terms in your prompt\n",
"\n",
"* Selectable GPU usage with `--gpu <id>`\n",
"\n",
"* Memory Monitoring 🔥: Shows VRAM usage and generation time after outputting\n",
"\n",
"* Word Seeds 🔥: Use words instead of seed numbers\n",
"\n",
"* CFG: Classifier free guidance scale, a feature for fine-tuning your output\n",
"\n",
"* Automatic Launcher: Activate conda and run Stable Diffusion with a single command\n",
"\n",
"* Lighter on VRAM: 512x512 Text2Image & Image2Image tested working on 4GB\n",
"\n",
"* Prompt validation: If your prompt is too long, you will get a warning in the text output field\n",
"\n",
"* Copy-paste generation parameters: A text output provides generation parameters in an easy to copy-paste form for easy sharing.\n",
"\n",
"* Correct seeds for batches: If you use a seed of 1000 to generate two batches of two images each, four generated images will have seeds: `1000, 1001, 1002, 1003`.\n",
"\n",
"* Prompt matrix: Separate multiple prompts using the `|` character, and the system will produce an image for every combination of them.\n",
"\n",
"* Loopback for Image2Image: A checkbox for img2img allowing to automatically feed output image as input for the next batch. Equivalent to saving output image, and replacing input image with it.\n",
"\n",
"# Stable Diffusion Web UI\n",
"\n",
"A fully-integrated and easy way to work with Stable Diffusion right from a browser window.\n",
"\n",
"## Streamlit\n",
"\n",
"![](images/streamlit/streamlit-t2i.png)\n",
"\n",
"**Features:**\n",
"\n",
"- Clean UI with an easy to use design, with support for widescreen displays.\n",
"- Dynamic live preview of your generations\n",
"- Easily customizable presets right from the WebUI (Coming Soon!)\n",
"- An integrated gallery to show the generations for a prompt or session (Coming soon!)\n",
"- Better optimization VRAM usage optimization, less errors for bigger generations.\n",
"- Text2Video - Generate video clips from text prompts right from the WEb UI (WIP)\n",
"- Concepts Library - Run custom embeddings others have made via textual inversion.\n",
"- Actively being developed with new features being added and planned - Stay Tuned!\n",
"- Streamlit is now the new primary UI for the project moving forward.\n",
"- *Currently in active development and still missing some of the features present in the Gradio Interface.*\n",
"\n",
"Please see the [Streamlit Documentation](docs/4.streamlit-interface.md) to learn more.\n",
"\n",
"## Gradio\n",
"\n",
"![](images/gradio/gradio-t2i.png)\n",
"\n",
"**Features:**\n",
"\n",
"- Older UI design that is fully functional and feature complete.\n",
"- Has access to all upscaling models, including LSDR.\n",
"- Dynamic prompt entry automatically changes your generation settings based on `--params` in a prompt.\n",
"- Includes quick and easy ways to send generations to Image2Image or the Image Lab for upscaling.\n",
"- *Note, the Gradio interface is no longer being actively developed and is only receiving bug fixes.*\n",
"\n",
"Please see the [Gradio Documentation](docs/5.gradio-interface.md) to learn more.\n",
"\n",
"## Image Upscalers\n",
"\n",
"---\n",
"\n",
"### GFPGAN\n",
"\n",
"![](images/GFPGAN.png)\n",
"\n",
"Lets you improve faces in pictures using the GFPGAN model. There is a checkbox in every tab to use GFPGAN at 100%, and also a separate tab that just allows you to use GFPGAN on any picture, with a slider that controls how strong the effect is.\n",
"\n",
"If you want to use GFPGAN to improve generated faces, you need to install it separately.\n",
"Download [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it\n",
"into the `/sygil-webui/models/gfpgan` directory. \n",
"\n",
"### RealESRGAN\n",
"\n",
"![](images/RealESRGAN.png)\n",
"\n",
"Lets you double the resolution of generated images. There is a checkbox in every tab to use RealESRGAN, and you can choose between the regular upscaler and the anime version.\n",
"There is also a separate tab for using RealESRGAN on any picture.\n",
"\n",
"Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).\n",
"Put them into the `sygil-webui/models/realesrgan` directory. \n",
"\n",
"\n",
"\n",
"### LSDR\n",
"\n",
"Download **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.cpkt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename last.ckpt to model.ckpt and place both under `sygil-webui/models/ldsr/`\n",
"\n",
"### GoBig, and GoLatent *(Currently on the Gradio version Only)*\n",
"\n",
"More powerful upscalers that uses a seperate Latent Diffusion model to more cleanly upscale images.\n",
"\n",
"\n",
"\n",
"Please see the [Image Enhancers Documentation](docs/6.image_enhancers.md) to learn more.\n",
"\n",
"-----\n",
"\n",
"### *Original Information From The Stable Diffusion Repo*\n",
"\n",
"# Stable Diffusion\n",
"\n",
"*Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:*\n",
"\n",
"[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://ommer-lab.com/research/latent-diffusion-models/)<br/>\n",
"[Robin Rombach](https://github.com/rromb)\\*,\n",
"[Andreas Blattmann](https://github.com/ablattmann)\\*,\n",
"[Dominik Lorenz](https://github.com/qp-qp)\\,\n",
"[Patrick Esser](https://github.com/pesser),\n",
"[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)<br/>\n",
"\n",
"**CVPR '22 Oral**\n",
"\n",
"which is available on [GitHub](https://github.com/CompVis/latent-diffusion). PDF at [arXiv](https://arxiv.org/abs/2112.10752). Please also visit our [Project page](https://ommer-lab.com/research/latent-diffusion-models/).\n",
"\n",
"[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion\n",
"model.\n",
"Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. \n",
"Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487), \n",
"this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.\n",
"With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.\n",
"See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).\n",
"\n",
"## Stable Diffusion v1\n",
"\n",
"Stable Diffusion v1 refers to a specific configuration of the model\n",
"architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet\n",
"and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and \n",
"then finetuned on 512x512 images.\n",
"\n",
"*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present\n",
"in its training data. \n",
"Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion).\n",
"\n",
"## Comments\n",
"\n",
"- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion)\n",
" and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch). \n",
" Thanks for open-sourcing!\n",
"\n",
"- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories). \n",
"\n",
"## BibTeX\n",
"\n",
"```\n",
"@misc{rombach2021highresolution,\n",
" title={High-Resolution Image Synthesis with Latent Diffusion Models}, \n",
" author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},\n",
" year={2021},\n",
" eprint={2112.10752},\n",
" archivePrefix={arXiv},\n",
" primaryClass={cs.CV}\n",
"}\n",
"\n",
"```"
],
"metadata": {
"id": "z4kQYMPQn4d-"
}
},
{
"cell_type": "markdown",
"source": [
"# Setup"
],
"metadata": {
"id": "IZjJSr-WPNxB"
}
},
{
"cell_type": "code",
"metadata": {
"id": "eq0-E5mjSpmP"
},
"source": [
"!nvidia-smi -L"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"!pip install condacolab\n",
"import condacolab\n",
"condacolab.install_from_url(\"https://github.com/conda-forge/miniforge/releases/download/4.14.0-0/Mambaforge-4.14.0-0-Linux-x86_64.sh\")\n",
"\n",
"import condacolab\n",
"condacolab.check()\n",
"\n",
"# The runtime will crash after this, its normal as we are forcing a restart of the runtime from code. Just hit \"Run All\" again."
],
"metadata": {
"id": "cDu33xkdJ5mD"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"!git clone https://github.com/Sygil-Dev/sygil-webui.git\n",
"%cd /content/sygil-webui/\n",
"!git checkout dev\n",
"!git pull\n",
"!wget -O arial.ttf https://github.com/matomo-org/travis-scripts/blob/master/fonts/Arial.ttf?raw=true"
],
"metadata": {
"id": "pZHGf03Vp305"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"!mamba install cudatoolkit=11.3 git numpy=1.22.3 pip=20.3 python=3.8.5 pytorch=1.11.0 scikit-image=0.19.2 torchvision=0.12.0 -y"
],
"metadata": {
"id": "dmN2igp5Yk3z"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Install dependencies.\n",
"!python --version\n",
"!pip install -r requirements.txt"
],
"metadata": {
"id": "vXX0OaR8KyLQ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"!npm install localtunnel"
],
"metadata": {
"id": "FHyVuT5aSM2G"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"#Launch the WebUI"
],
"metadata": {
"id": "csi6cj6gQZmC"
}
},
{
"cell_type": "code",
"source": [
"#@title Mount Google Drive\n",
"import os\n",
"mount_google_drive = True #@param {type:\"boolean\"}\n",
"save_outputs_to_drive = True #@param {type:\"boolean\"}\n",
"\n",
"if mount_google_drive:\n",
" # Mount google drive to store your outputs.\n",
" from google.colab import drive\n",
" drive.mount('/content/drive/', force_remount=True)\n",
"\n",
"if save_outputs_to_drive:\n",
" os.makedirs(\"/content/drive/MyDrive/sygil-webui/outputs\", exist_ok=True)\n",
" os.symlink(\"/content/drive/MyDrive/sygil-webui/outputs\", \"/content/sygil-webui/outputs\", target_is_directory=True)\n"
],
"metadata": {
"id": "pcSWo9Zkzbsf"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Enter Huggingface token\n",
"!git config --global credential.helper store\n",
"!huggingface-cli login"
],
"metadata": {
"id": "IsbG7fvIrKwg"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title <-- Press play on the music player to keep the tab alive (Uses only 13MB of data)\n",
"%%html\n",
"<b>Press play on the music player to keep the tab alive, then start your generation below (Uses only 13MB of data)</b><br/>\n",
"<audio src=\"https://henk.tech/colabkobold/silence.m4a\" controls>"
],
"metadata": {
"id": "-WknaU7uu_q6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"JS to prevent idle timeout:\n",
"\n",
"Press F12 OR CTRL + SHIFT + I OR right click on this website -> inspect. Then click on the console tab and paste in the following code.\n",
"\n",
"function ClickConnect(){\n",
"console.log(\"Working\");\n",
"document.querySelector(\"colab-toolbar-button#connect\").click()\n",
"}\n",
"setInterval(ClickConnect,60000)"
],
"metadata": {
"id": "pjIjiCuJysJI"
}
},
{
"cell_type": "code",
"source": [
"#@title Open port 8501 and start Streamlit server. Open link in 'link.txt' file in file pane on left.\n",
"!npx localtunnel --port 8501 &>/content/link.txt &\n",
"!streamlit run scripts/webui_streamlit.py --theme.base dark --server.headless true 2>&1 | tee -a /content/log.txt"
],
"metadata": {
"id": "5whXm2nfSZ39"
},
"execution_count": null,
"outputs": []
}
]
}


@@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or


@@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or
@@ -19,15 +19,15 @@
# You may add overrides in a file named "userconfig_streamlit.yaml" in this folder, which can contain any subset # You may add overrides in a file named "userconfig_streamlit.yaml" in this folder, which can contain any subset
# of the properties below. # of the properties below.
general: general:
version: 1.17.2 version: 1.24.6
streamlit_telemetry: False streamlit_telemetry: False
default_theme: dark default_theme: dark
huggingface_token: '' huggingface_token: ''
gpu: 0 gpu: 0
outdir: outputs outdir: outputs
default_model: "Stable Diffusion v1.4" default_model: "Stable Diffusion v1.5"
default_model_config: "configs/stable-diffusion/v1-inference.yaml" default_model_config: "configs/stable-diffusion/v1-inference.yaml"
default_model_path: "models/ldm/stable-diffusion-v1/model.ckpt" default_model_path: "models/ldm/stable-diffusion-v1/Stable Diffusion v1.5.ckpt"
use_sd_concepts_library: True use_sd_concepts_library: True
sd_concepts_library_folder: "models/custom/sd-concepts-library" sd_concepts_library_folder: "models/custom/sd-concepts-library"
GFPGAN_dir: "./models/gfpgan" GFPGAN_dir: "./models/gfpgan"
@@ -51,7 +51,7 @@ general:
save_format: "png" save_format: "png"
skip_grid: False skip_grid: False
skip_save: False skip_save: False
grid_format: "jpg:95" grid_quality: 95
n_rows: -1 n_rows: -1
no_verify_input: False no_verify_input: False
no_half: False no_half: False
@@ -131,8 +131,8 @@ txt2img:
write_info_files: True write_info_files: True
txt2vid: txt2vid:
default_model: "CompVis/stable-diffusion-v1-4" default_model: "runwayml/stable-diffusion-v1-5"
custom_models_list: ["CompVis/stable-diffusion-v1-4"] custom_models_list: ["runwayml/stable-diffusion-v1-5", "CompVis/stable-diffusion-v1-4", "hakurei/waifu-diffusion"]
prompt: prompt:
width: width:
value: 512 value: 512
@@ -212,7 +212,7 @@ txt2vid:
format: "%.5f" format: "%.5f"
beta_scheduler_type: "scaled_linear" beta_scheduler_type: "scaled_linear"
max_frames: 100 max_duration_in_seconds: 30
LDSR_config: LDSR_config:
sampling_steps: 50 sampling_steps: 50
@@ -230,7 +230,8 @@ img2img:
step: 0.01 step: 0.01
# 0: Keep masked area # 0: Keep masked area
# 1: Regenerate only masked area # 1: Regenerate only masked area
mask_mode: 0 mask_mode: 1
noise_mode: "Matched Noise"
mask_restore: False mask_restore: False
# 0: Just resize # 0: Just resize
# 1: Crop and resize # 1: Crop and resize
@@ -304,7 +305,7 @@ img2img:
write_info_files: True write_info_files: True
img2txt: img2txt:
batch_size: 420 batch_size: 2000
blip_image_eval_size: 512 blip_image_eval_size: 512
keep_all_models_loaded: False keep_all_models_loaded: False
@@ -325,12 +326,12 @@ daisi_app:
model_manager: model_manager:
models: models:
stable_diffusion: stable_diffusion:
model_name: "Stable Diffusion v1.4" model_name: "Stable Diffusion v1.5"
save_location: "./models/ldm/stable-diffusion-v1" save_location: "./models/ldm/stable-diffusion-v1"
files: files:
model_ckpt: model_ckpt:
file_name: "model.ckpt" file_name: "Stable Diffusion v1.5.ckpt"
download_link: "https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media" download_link: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt"
gfpgan: gfpgan:
model_name: "GFPGAN" model_name: "GFPGAN"
@@ -362,12 +363,12 @@ model_manager:
waifu_diffusion: waifu_diffusion:
model_name: "Waifu Diffusion v1.2" model_name: "Waifu Diffusion v1.3"
save_location: "./models/custom" save_location: "./models/custom"
files: files:
waifu_diffusion: waifu_diffusion:
file_name: "waifu-diffusion.ckpt" file_name: "Waifu-Diffusion-v1-3 Full ema.ckpt"
download_link: "https://huggingface.co/crumb/pruned-waifu-diffusion/resolve/main/model-pruned.ckpt" download_link: "https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-full.ckpt"
trinart_stable_diffusion: trinart_stable_diffusion:
@@ -378,13 +379,21 @@ model_manager:
file_name: "trinart.ckpt" file_name: "trinart.ckpt"
download_link: "https://huggingface.co/naclbit/trinart_stable_diffusion_v2/resolve/main/trinart2_step95000.ckpt" download_link: "https://huggingface.co/naclbit/trinart_stable_diffusion_v2/resolve/main/trinart2_step95000.ckpt"
sd_wd_ld_trinart_merged:
model_name: "SD1.5-WD1.3-LD-Trinart-Merged"
save_location: "./models/custom"
files:
sd_wd_ld_trinart_merged:
file_name: "SD1.5-WD1.3-LD-Trinart-Merged.ckpt"
download_link: "https://huggingface.co/ZeroCool94/sd1.5-wd1.3-ld-trinart-merged/resolve/main/SD1.5-WD1.3-LD-Trinart-Merged.ckpt"
stable_diffusion_concept_library: stable_diffusion_concept_library:
model_name: "Stable Diffusion Concept Library" model_name: "Stable Diffusion Concept Library"
save_location: "./models/custom/sd-concepts-library/" save_location: "./models/custom/sd-concepts-library/"
files: files:
concept_library: concept_library:
file_name: "" file_name: ""
download_link: "https://github.com/sd-webui/sd-concepts-library" download_link: "https://github.com/Sygil-Dev/sd-concepts-library"
blip_model: blip_model:
model_name: "Blip Model" model_name: "Blip Model"

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

160
data/img2txt/subreddits.txt Normal file

@@ -0,0 +1,160 @@
/r/ImaginaryAetherpunk
/r/ImaginaryAgriculture
/r/ImaginaryAirships
/r/ImaginaryAliens
/r/ImaginaryAngels
/r/ImaginaryAnimals
/r/ImaginaryArchers
/r/ImaginaryArchitecture
/r/ImaginaryArmor
/r/ImaginaryArtisans
/r/ImaginaryAssassins
/r/ImaginaryAstronauts
/r/ImaginaryAsylums
/r/ImaginaryAutumnscapes
/r/ImaginaryAviation
/r/ImaginaryAzeroth
/r/ImaginaryBattlefields
/r/ImaginaryBeasts
/r/ImaginaryBehemoths
/r/ImaginaryBodyscapes
/r/ImaginaryBooks
/r/ImaginaryCanyons
/r/ImaginaryCarnage
/r/ImaginaryCastles
/r/ImaginaryCaves
/r/ImaginaryCentaurs
/r/ImaginaryCharacters
/r/ImaginaryCityscapes
/r/ImaginaryClerics
/r/ImaginaryCowboys
/r/ImaginaryCrawlers
/r/ImaginaryCultists
/r/ImaginaryCybernetics
/r/ImaginaryCyberpunk
/r/ImaginaryDarkSouls
/r/ImaginaryDemons
/r/ImaginaryDerelicts
/r/ImaginaryDeserts
/r/ImaginaryDieselpunk
/r/ImaginaryDinosaurs
/r/ImaginaryDragons
/r/ImaginaryDruids
/r/ImaginaryDwarves
/r/ImaginaryDwellings
/r/ImaginaryElementals
/r/ImaginaryElves
/r/ImaginaryExplosions
/r/ImaginaryFactories
/r/ImaginaryFaeries
/r/ImaginaryFallout
/r/ImaginaryFamilies
/r/ImaginaryFashion
/r/ImaginaryFood
/r/ImaginaryForests
/r/ImaginaryFutureWar
/r/ImaginaryFuturism
/r/ImaginaryGardens
/r/ImaginaryGatherings
/r/ImaginaryGiants
/r/ImaginaryGlaciers
/r/ImaginaryGnomes
/r/ImaginaryGoblins
/r/ImaginaryHellscapes
/r/ImaginaryHistory
/r/ImaginaryHorrors
/r/ImaginaryHumans
/r/ImaginaryHybrids
/r/ImaginaryIcons
/r/ImaginaryImmortals
/r/ImaginaryInteriors
/r/ImaginaryIslands
/r/ImaginaryJedi
/r/ImaginaryKanto
/r/ImaginaryKnights
/r/ImaginaryLakes
/r/ImaginaryLandscapes
/r/ImaginaryLesbians
/r/ImaginaryLeviathans
/r/ImaginaryLovers
/r/ImaginaryMarvel
/r/ImaginaryMeIRL
/r/ImaginaryMechs
/r/ImaginaryMen
/r/ImaginaryMerchants
/r/ImaginaryMerfolk
/r/ImaginaryMiddleEarth
/r/ImaginaryMindscapes
/r/ImaginaryMonsterBoys
/r/ImaginaryMonsterGirls
/r/ImaginaryMonsters
/r/ImaginaryMonuments
/r/ImaginaryMountains
/r/ImaginaryMovies
/r/ImaginaryMythology
/r/ImaginaryNatives
/r/ImaginaryNecronomicon
/r/ImaginaryNightscapes
/r/ImaginaryNinjas
/r/ImaginaryNobles
/r/ImaginaryNomads
/r/ImaginaryOrcs
/r/ImaginaryPathways
/r/ImaginaryPirates
/r/ImaginaryPolice
/r/ImaginaryPolitics
/r/ImaginaryPortals
/r/ImaginaryPrisons
/r/ImaginaryPropaganda
/r/ImaginaryRivers
/r/ImaginaryRobotics
/r/ImaginaryRuins
/r/ImaginaryScholars
/r/ImaginaryScience
/r/ImaginarySeascapes
/r/ImaginarySkyscapes
/r/ImaginarySlavery
/r/ImaginarySoldiers
/r/ImaginarySpirits
/r/ImaginarySports
/r/ImaginarySpringscapes
/r/ImaginaryStarscapes
/r/ImaginaryStarships
/r/ImaginaryStatues
/r/ImaginarySteampunk
/r/ImaginarySummerscapes
/r/ImaginarySwamps
/r/ImaginaryTamriel
/r/ImaginaryTaverns
/r/ImaginaryTechnology
/r/ImaginaryTemples
/r/ImaginaryTowers
/r/ImaginaryTrees
/r/ImaginaryTrolls
/r/ImaginaryUndead
/r/ImaginaryUnicorns
/r/ImaginaryVampires
/r/ImaginaryVehicles
/r/ImaginaryVessels
/r/ImaginaryVikings
/r/ImaginaryVillages
/r/ImaginaryVolcanoes
/r/ImaginaryWTF
/r/ImaginaryWalls
/r/ImaginaryWarhammer
/r/ImaginaryWarriors
/r/ImaginaryWarships
/r/ImaginaryWastelands
/r/ImaginaryWaterfalls
/r/ImaginaryWaterscapes
/r/ImaginaryWeaponry
/r/ImaginaryWeather
/r/ImaginaryWerewolves
/r/ImaginaryWesteros
/r/ImaginaryWildlands
/r/ImaginaryWinterscapes
/r/ImaginaryWitcher
/r/ImaginaryWitches
/r/ImaginaryWizards
/r/ImaginaryWorldEaters
/r/ImaginaryWorlds

1936
data/img2txt/tags.txt Normal file

File diff suppressed because it is too large


@@ -0,0 +1,63 @@
Fine Art
Diagrammatic
Geometric
Architectural
Analytic
3D
Anamorphic
Pencil
Color Pencil
Charcoal
Graphite
Chalk
Pen
Ink
Crayon
Pastel
Sand
Beach Art
Rangoli
Mehndi
Flower
Food Art
Tattoo
Digital
Pixel
Embroidery
Line
Pointillism
Single Color
Stippling
Contour
Hatching
Scumbling
Scribble
Geometric Portrait
Triangulation
Caricature
Photorealism
Photo realistic
Doodling
Wordtoons
Cartoon
Anime
Manga
Graffiti
Typography
Calligraphy
Mosaic
Figurative
Anatomy
Life
Still life
Portrait
Landscape
Perspective
Funny
Surreal
Wall Mural
Street
Realistic
Photo Realistic
Hyper Realistic
Doodle

36704
data/tags/key_phrases.json Normal file

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

467
db.json Normal file

@@ -0,0 +1,467 @@
{
"stable_diffusion": {
"name": "stable_diffusion",
"type": "ckpt",
"description": "Generalist AI image generating model. The baseline for all finetuned models.",
"version": "1.5",
"style": "generalist",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/ldm/stable-diffusion-v1/model_1_5.ckpt"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "model_1_5.ckpt",
"file_path": "models/ldm/stable-diffusion-v1",
"file_url": "https://{username}:{password}@huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt",
"hf_auth": true
}
]
},
"available": false
},
"stable_diffusion_1.4": {
"name": "stable_diffusion",
"type": "ckpt",
"description": "Generalist AI image generating model. The baseline for all finetuned models.",
"version": "1.4",
"style": "generalist",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/ldm/stable-diffusion-v1/model.ckpt",
"md5sum": "c01059060130b8242849d86e97212c84"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "model.ckpt",
"file_path": "models/ldm/stable-diffusion-v1",
"file_url": "https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media"
}
],
"alt_download": [
{
"file_name": "model.ckpt",
"file_path": "models/ldm/stable-diffusion-v1",
"file_url": "https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt",
"hf_auth": true
}
]
},
"available": false
},
"waifu_diffusion": {
"name": "waifu_diffusion",
"type": "ckpt",
"description": "Anime styled generations.",
"version": "1.3",
"style": "anime",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/waifu-diffusion.ckpt",
"md5sum": "a2aa170e3f513b32a3fd8841656e0123"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "waifu-diffusion.ckpt",
"file_path": "models/custom",
"file_url": "https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-full.ckpt"
}
]
},
"available": false
},
"Furry Epoch": {
"name": "Furry Epoch",
"type": "ckpt",
"description": "Furry styled generations.",
"version": "4",
"style": "furry",
"nsfw": false,
"download_all": false,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/furry-diffusion.ckpt",
"md5sum": "f8ef45a295ef4966682f6e8fc2c6830d"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "furry-diffusion.ckpt",
"file_path": "models/custom",
"file_url": "https://sexy.canine.wf/file/furry-ckpt/furry_epoch4.ckpt"
}
]
},
"available": false
},
"Yiffy": {
"name": "Yiffy",
"type": "ckpt",
"description": "Furry styled generations.",
"version": "18",
"style": "furry",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/yiffy.ckpt",
"md5sum": "dbe25794e24af183565dc45e9ec99713"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "yiffy.ckpt",
"file_path": "models/custom",
"file_url": "https://sexy.canine.wf/file/yiffy-ckpt/yiffy-e18.ckpt"
}
]
},
"available": false
},
"Zack3D": {
"name": "Zack3D",
"type": "ckpt",
"description": "Kink/NSFW oriented furry styled generations.",
"version": "1",
"style": "furry",
"nsfw": true,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/Zack3D.ckpt",
"md5sum": "aa944b1ecdaac60113027a0fdcda4f1b"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "Zack3D.ckpt",
"file_path": "models/custom",
"file_url": "https://sexy.canine.wf/file/furry-ckpt/Zack3D_Kinky-v1.ckpt"
}
]
},
"available": false
},
"trinart": {
"name": "trinart",
"type": "ckpt",
"description": "Manga styled generations.",
"version": "1",
"style": "anime",
"nsfw": false,
"download_all": true,
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/custom/trinart.ckpt"
},
{
"path": "configs/stable-diffusion/v1-inference.yaml"
}
],
"download": [
{
"file_name": "trinart.ckpt",
"file_path": "models/custom",
"file_url": "https://huggingface.co/naclbit/trinart_stable_diffusion_v2/resolve/main/trinart2_step95000.ckpt"
}
]
},
"available": false
},
"RealESRGAN_x4plus": {
"name": "RealESRGAN_x4plus",
"type": "realesrgan",
"description": "Upscaler.",
"version": "0.1.0",
"style": "generalist",
"config": {
"files": [
{
"path": "models/realesrgan/RealESRGAN_x4plus.pth"
}
],
"download": [
{
"file_name": "RealESRGAN_x4plus.pth",
"file_path": "models/realesrgan",
"file_url": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth"
}
]
},
"available": false
},
"RealESRGAN_x4plus_anime_6B": {
"name": "RealESRGAN_x4plus_anime_6B",
"type": "realesrgan",
"description": "Anime focused upscaler.",
"version": "0.2.2.4",
"style": "anime",
"config": {
"files": [
{
"path": "models/realesrgan/RealESRGAN_x4plus_anime_6B.pth"
}
],
"download": [
{
"file_name": "RealESRGAN_x4plus_anime_6B.pth",
"file_path": "models/realesrgan",
"file_url": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth"
}
]
},
"available": false
},
"GFPGAN": {
"name": "GFPGAN",
"type": "gfpgan",
"description": "Face correction.",
"version": "1.4",
"style": "generalist",
"config": {
"files": [
{
"path": "models/gfpgan/GFPGANv1.4.pth"
},
{
"path": "gfpgan/weights/detection_Resnet50_Final.pth"
},
{
"path": "gfpgan/weights/parsing_parsenet.pth"
}
],
"download": [
{
"file_name": "GFPGANv1.4.pth",
"file_path": "models/gfpgan",
"file_url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth"
},
{
"file_name": "detection_Resnet50_Final.pth",
"file_path": "./gfpgan/weights",
"file_url": "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth"
},
{
"file_name": "parsing_parsenet.pth",
"file_path": "./gfpgan/weights",
"file_url": "https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth"
}
]
},
"available": false
},
"LDSR": {
"name": "LDSR",
"type": "ckpt",
"description": "Upscaler.",
"version": "1",
"style": "generalist",
"nsfw": false,
"download_all": true,
"config": {
"files": [
{
"path": "models/ldsr/model.ckpt"
},
{
"path": "models/ldsr/project.yaml"
}
],
"download": [
{
"file_name": "model.ckpt",
"file_path": "models/ldsr",
"file_url": "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1"
},
{
"file_name": "project.yaml",
"file_path": "models/ldsr",
"file_url": "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1"
}
]
},
"available": false
},
"BLIP": {
"name": "BLIP",
"type": "blip",
"config": {
"files": [
{
"path": "models/blip/model__base_caption.pth"
}
],
"download": [
{
"file_name": "model__base_caption.pth",
"file_path": "models/blip",
"file_url": "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base_caption.pth"
}
]
},
"available": false
},
"ViT-L/14": {
"name": "ViT-L/14",
"type": "clip",
"config": {
"files": [
{
"path": "models/clip/ViT-L-14.pt"
}
],
"download": [
{
"file_name": "ViT-L-14.pt",
"file_path": "./models/clip",
"file_url": "https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt"
}
]
},
"available": false
},
"ViT-g-14": {
"name": "ViT-g-14",
"pretrained_name": "laion2b_s12b_b42k",
"type": "open_clip",
"config": {
"files": [
{
"path": "models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/"
}
],
"download": [
{
"file_name": "main",
"file_path": "./models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/refs",
"file_content": "b36bdd32483debcf4ed2f918bdae1d4a46ee44b8"
},
{
"file_name": "6aac683f899159946bc4ca15228bb7016f3cbb1a2c51f365cba0b23923f344da",
"file_path": "./models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/blobs",
"file_url": "https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K/resolve/main/open_clip_pytorch_model.bin"
},
{
"file_name": "open_clip_pytorch_model.bin",
"file_path": "./models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/snapshots/b36bdd32483debcf4ed2f918bdae1d4a46ee44b8",
"symlink": "./models/clip/models--laion--CLIP-ViT-g-14-laion2B-s12B-b42K/blobs/6aac683f899159946bc4ca15228bb7016f3cbb1a2c51f365cba0b23923f344da"
}
]
},
"available": false
},
"ViT-H-14": {
"name": "ViT-H-14",
"pretrained_name": "laion2b_s32b_b79k",
"type": "open_clip",
"config": {
"files": [
{
"path": "models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/"
}
],
"download": [
{
"file_name": "main",
"file_path": "./models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/refs",
"file_content": "58a1e03a7acfacbe6b95ebc24ae0394eda6a14fc"
},
{
"file_name": "9a78ef8e8c73fd0df621682e7a8e8eb36c6916cb3c16b291a082ecd52ab79cc4",
"file_path": "./models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/blobs",
"file_url": "https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin"
},
{
"file_name": "open_clip_pytorch_model.bin",
"file_path": "./models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/snapshots/58a1e03a7acfacbe6b95ebc24ae0394eda6a14fc",
"symlink": "./models/clip/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/blobs/9a78ef8e8c73fd0df621682e7a8e8eb36c6916cb3c16b291a082ecd52ab79cc4"
}
]
},
"available": false
},
"diffusers_stable_diffusion": {
"name": "diffusers_stable_diffusion",
"type": "diffusers",
"requires": [
"clip-vit-large-patch14"
],
"config": {
"files": [
{
"path": "models/diffusers/"
}
],
"download": [
{
"file_name": "diffusers_stable_diffusion",
"file_url": "https://{username}:{password}@huggingface.co/CompVis/stable-diffusion-v1-4.git",
"git": true,
"hf_auth": true,
"post_process": [
{
"delete": "models/diffusers/stable-diffusion-v1-4/.git"
}
]
}
]
},
"available": false
}
}
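Every entry above follows the same shape: `config.files` lists what must exist on disk, and `config.download` lists records with `file_name`, `file_path`, and `file_url`, plus optional flags such as `hf_auth` (Hugging Face credentials substituted into the URL), `md5sum`, or ref/symlink records without a URL. The sketch below shows one way such records could be consumed; it is illustrative only and is not the project's model-manager code.

```python
# Illustrative consumer of the db.json format above. This is a sketch, not the
# project's actual model-manager code; checksum handling (md5sum) is left out.
import json
import urllib.request
from pathlib import Path


def fetch_model(entry: dict, hf_username: str = "", hf_password: str = "") -> None:
    for item in entry["config"]["download"]:
        if "file_url" not in item:  # some records only describe refs or symlinks
            continue
        url = item["file_url"]
        if item.get("hf_auth"):
            # hf_auth URLs carry {username}/{password} placeholders for Hugging Face.
            url = url.format(username=hf_username, password=hf_password)
        target = Path(item.get("file_path", ".")) / item["file_name"]
        target.parent.mkdir(parents=True, exist_ok=True)
        if not target.exists():
            print(f"downloading {target} ...")
            urllib.request.urlretrieve(url, target)


db = json.loads(Path("db.json").read_text())
fetch_model(db["RealESRGAN_x4plus"])  # a small entry with no auth required
```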

98
db_dep.json Normal file

@@ -0,0 +1,98 @@
{
"sd-concepts-library": {
"type": "dependency",
"optional": true,
"config": {
"files": [
{
"path": "models/custom/sd-concepts-library/"
}
],
"download": [
{
"file_name": "sd-concepts-library",
"file_path": "./models/custom/sd-concepts-library/",
"file_url": "https://github.com/sd-webui/sd-concepts-library/archive/refs/heads/main.zip",
"unzip": true,
"move_subfolder": "sd-concepts-library"
}
]
},
"available": false
},
"clip-vit-large-patch14": {
"type": "dependency",
"optional": false,
"config": {
"files": [
{
"path": "models/clip-vit-large-patch14/config.json"
},
{
"path": "models/clip-vit-large-patch14/merges.txt"
},
{
"path": "models/clip-vit-large-patch14/preprocessor_config.json"
},
{
"path": "models/clip-vit-large-patch14/pytorch_model.bin"
},
{
"path": "models/clip-vit-large-patch14/special_tokens_map.json"
},
{
"path": "models/clip-vit-large-patch14/tokenizer.json"
},
{
"path": "models/clip-vit-large-patch14/tokenizer_config.json"
},
{
"path": "models/clip-vit-large-patch14/vocab.json"
}
],
"download": [
{
"file_name": "config.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json"
},
{
"file_name": "merges.txt",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/merges.txt"
},
{
"file_name": "preprocessor_config.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/preprocessor_config.json"
},
{
"file_name": "pytorch_model.bin",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin"
},
{
"file_name": "special_tokens_map.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/special_tokens_map.json"
},
{
"file_name": "tokenizer.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/tokenizer.json"
},
{
"file_name": "tokenizer_config.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/tokenizer_config.json"
},
{
"file_name": "vocab.json",
"file_path": "models/clip-vit-large-patch14",
"file_url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/vocab.json"
}
]
},
"available": false
}
}


@@ -1,10 +1,12 @@
--- ---
title: Windows Installation title: Windows Installation
--- ---
<!--
This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
Copyright 2022 sd-webui team. <!--
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or the Free Software Foundation, either version 3 of the License, or
@@ -20,6 +22,7 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
--> -->
# Initial Setup # Initial Setup
> This is a windows guide. [To install on Linux, see this page.](2.linux-installation.md) > This is a windows guide. [To install on Linux, see this page.](2.linux-installation.md)
## Prerequisites ## Prerequisites
@@ -30,19 +33,18 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
![CleanShot 2022-08-31 at 16 29 48@2x](https://user-images.githubusercontent.com/463317/187796320-e6edbb39-dff1-46a2-a1a1-c4c1875d414c.jpg) ![CleanShot 2022-08-31 at 16 29 48@2x](https://user-images.githubusercontent.com/463317/187796320-e6edbb39-dff1-46a2-a1a1-c4c1875d414c.jpg)
* Download Miniconda3: * Download Miniconda3:
[https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe](https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe) Get this installed so that you have access to the Miniconda3 Prompt Console. [https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe](https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe) Get this installed so that you have access to the Miniconda3 Prompt Console.
* Open Miniconda3 Prompt from your start menu after it has been installed * Open Miniconda3 Prompt from your start menu after it has been installed
* _(Optional)_ Create a new text file in your root directory `/stable-diffusion-webui/custom-conda-path.txt` that contains the path to your relevant Miniconda3, for example `C:\Users\<username>\miniconda3` (replace `<username>` with your own username). This is required if you have more than 1 miniconda installation or are using custom installation location. * _(Optional)_ Create a new text file in your root directory `/sygil-webui/custom-conda-path.txt` that contains the path to your relevant Miniconda3, for example `C:\Users\<username>\miniconda3` (replace `<username>` with your own username). This is required if you have more than 1 miniconda installation or are using custom installation location.
## Cloning the repo ## Cloning the repo
Type `git clone https://github.com/sd-webui/stable-diffusion-webui.git` into the prompt. Type `git clone https://github.com/Sygil-Dev/sygil-webui.git` into the prompt.
This will create the `stable-diffusion-webui` directory in your Windows user folder. This will create the `sygil-webui` directory in your Windows user folder.
![CleanShot 2022-08-31 at 16 31 20@2x](https://user-images.githubusercontent.com/463317/187796462-29e5bafd-bbc1-4a48-adc8-7eccc174cb62.jpg) ![CleanShot 2022-08-31 at 16 31 20@2x](https://user-images.githubusercontent.com/463317/187796462-29e5bafd-bbc1-4a48-adc8-7eccc174cb62.jpg)
--- ---
@@ -51,32 +53,27 @@ Once a repo has been cloned, updating it is as easy as typing `git pull` inside
![CleanShot 2022-08-31 at 16 36 34@2x](https://user-images.githubusercontent.com/463317/187796970-db94402f-717b-43a8-9c85-270c0cd256c3.jpg) ![CleanShot 2022-08-31 at 16 36 34@2x](https://user-images.githubusercontent.com/463317/187796970-db94402f-717b-43a8-9c85-270c0cd256c3.jpg)
* Next you are going to want to create a Hugging Face account: [https://huggingface.co/](https://huggingface.co/)
* After you have signed up and are signed in, go to this link and click on Authorize: [https://huggingface.co/CompVis/stable-diffusion-v-1-4-original](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
* After you have authorized your account, go to this link to download the model weights for version 1.4 of the model. Future versions will be released in the same way, and updating them will be a similar process:
[https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt)
* Download the model into this directory: `C:\Users\<username>\sygil-webui\models\ldm\stable-diffusion-v1`
* Rename `sd-v1-4.ckpt` to `model.ckpt` once it is inside the stable-diffusion-v1 folder.
* Since we are already in our sygil-webui folder in Miniconda, our next step is to create the environment Stable Diffusion needs to work.
* _(Optional)_ If you already have an environment set up for an installation of Stable Diffusion named `ldm`, open the `environment.yaml` file in `\sygil-webui\` and change the environment name inside it from `ldm` to `ldo`.

---

## First run

* `webui.cmd` at the root folder (`\sygil-webui\`) is your main script that you'll always run. It has the functions to automatically do the following:
  * Create conda env
  * Install and update requirements
  * Run the relauncher and webui.py script for gradio UI options

@@ -95,34 +92,36 @@ Once a repo has been cloned, updating it is as easy as typing `git pull` inside

* You should be able to see progress in your `webui.cmd` window. The [http://localhost:7860/](http://localhost:7860/) page will be automatically updated to show the final image once progress reaches 100%.
* Images created with the web interface will be saved to `\sygil-webui\outputs\` in their respective folders alongside `.yaml` text files with all of the details of your prompts for easy referencing later. Images will also be saved with their seed and numbered so that they can be cross-referenced with their `.yaml` files easily.
---

### Optional additional models

There are three more models that we need to download in order to get the most out of the functionality offered by Sygil-Dev.

> The models are placed inside the `src` folder. If you don't have an `src` folder inside your root directory, it means that you haven't installed the dependencies for your environment yet. [Follow this step](#first-run) before proceeding.

### GFPGAN

1. If you want to use GFPGAN to improve generated faces, you need to install it separately.
2. Download [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put them into the `/sygil-webui/models/gfpgan` directory.

### RealESRGAN

1. Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).
2. Put them into the `sygil-webui/models/realesrgan` directory.

### LDSR

1. Detailed instructions are [here](https://github.com/Hafiidz/latent-diffusion). Brief instructions follow.
2. Git clone [Hafiidz/latent-diffusion](https://github.com/Hafiidz/latent-diffusion) into your `/sygil-webui/src/` folder.
3. Run `/sygil-webui/models/ldsr/download_model.bat` to automatically download and rename the models.
4. Wait until it is done; you can confirm by checking for two new files in `sygil-webui/models/ldsr/`.
5. _(Optional)_ If there are no files there, you can manually download the **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.ckpt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1).
6. Rename `last.ckpt` to `model.ckpt` and place both under `sygil-webui/models/ldsr/`.
7. Refer to [this issue](https://github.com/Sygil-Dev/sygil-webui/issues/488) if you run into any problems.

# Credits

> Modified by [Hafiidz](https://github.com/Hafiidz) with help from the Sygil-Dev Discord and team.
@@ -2,9 +2,9 @@
title: Linux Installation
---
<!--
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -42,9 +42,9 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.

**Step 3:** Make the script executable by opening the directory in your Terminal and typing `chmod +x linux-sd.sh`, or whatever you named this file as.

**Step 4:** Run the script with `./linux-sd.sh`; it will begin by cloning the [WebUI Github Repo](https://github.com/Sygil-Dev/sygil-webui) to the directory the script is located in. This folder will be named `sygil-webui`.

**Step 5:** The script will pause and ask that you move/copy the downloaded 1.4 AI models to the `sygil-webui` folder. Press Enter once you have done so to continue.

**If you are running low on storage space, you can just move the 1.4 AI models file directly to this directory; it will not be deleted, simply moved and renamed. However, my personal suggestion is to just **copy** it to the repo folder, in case you desire to delete and rebuild your Stable Diffusion build again.**

@@ -76,7 +76,7 @@ The user will have the ability to set these to yes or no using the menu choices.

- Uses An Older Interface Style
- Will Not Receive Major Updates

**Step 9:** If everything has gone successfully, either a new browser window will open with the Streamlit version, or you should see `Running on local URL: http://localhost:7860/` in your Terminal if you launched the Gradio Interface version. Generated images will be located in the `outputs` directory inside of `sygil-webui`. Enjoy the definitive Stable Diffusion WebUI experience on Linux! :)

## Ultimate Stable Diffusion Customizations

@@ -87,7 +87,7 @@ If the user chooses to Customize their setup, then they will be presented with t

- Update the Stable Diffusion WebUI fork from the GitHub Repo
- Customize the launch arguments for Gradio Interface version of Stable Diffusion (See Above)

### Refer back to the original [WebUI Github Repo](https://github.com/Sygil-Dev/sygil-webui) for useful tips and links to other resources that can improve your Stable Diffusion experience

## Planned Additions

- Investigate ways to handle Anaconda automatic installation on a user's system.
@@ -2,7 +2,7 @@
title: Running Stable Diffusion WebUI Using Docker
---
<!--
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 sd-webui team.
This program is free software: you can redistribute it and/or modify
@@ -69,7 +69,7 @@ Additional Requirements:

Other Notes:
* "Optional" packages commonly used with Stable Diffusion WebUI workflows, such as RealESRGAN and GFPGAN, will be installed by default.
* An older version of running Stable Diffusion WebUI using Docker exists here: https://github.com/Sygil-Dev/sygil-webui/discussions/922

### But what about AMD?

There is tentative support for AMD GPUs through docker which can be enabled via `docker-compose.amd.yml`,

@@ -91,7 +91,7 @@ in your `.profile` or through a tool like `direnv`

### Clone Repository

* Clone this repository to your host machine:
  * `git clone https://github.com/Sygil-Dev/sygil-webui.git`
* If you plan to use Docker Compose to run the image in a container (most users), create an `.env_docker` file using the example file:
  * `cp .env_docker.example .env_docker`
* Edit `.env_docker` using the text editor of your choice.

@@ -105,7 +105,7 @@ The default `docker-compose.yml` file will create a Docker container instance n

* Create an instance of the Stable Diffusion WebUI image as a Docker container:
  * `docker compose up`
* During the first run, the container image will be built containing all of the dependencies necessary to run Stable Diffusion. This build process will take several minutes to complete.
* After the image build has completed, you will have a Docker image for running the Stable Diffusion WebUI tagged `sygil-webui:dev`.

(Optional) Daemon mode:
* You can start the container in "daemon" mode by applying the `-d` option: `docker compose up -d`. This will run the server in the background so you can close your console window without losing your work.

@@ -160,9 +160,9 @@ You will need to re-download all associated model files/weights used by Stable D

* `docker exec -it st-webui /bin/bash`
* `docker compose exec stable-diffusion bash`
* To start a container using the Stable Diffusion WebUI Docker image without Docker Compose, you can do so with the following command:
  * `docker run --rm -it --entrypoint /bin/bash sygil-webui:dev`
* To start a container, with mapped ports, GPU resource access, and a local directory bound as a container volume, you can do so with the following command:
  * `docker run --rm -it -p 8501:8501 -p 7860:7860 --gpus all -v $(pwd):/sd --entrypoint /bin/bash sygil-webui:dev`

---
@@ -2,9 +2,9 @@
title: Streamlit Web UI Interface
---
<!--
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -94,7 +94,7 @@ Streamlit Image2Image allows for you to take an image, be it generated by Stable

The Concept Library allows for the easy usage of custom textual inversion models. These models may be loaded into `models/custom/sd-concepts-library` and will appear in the Concepts Library in Streamlit. To use one of these custom models in a prompt, either copy it using the button on the model, or type `<model-name>` in the prompt where you wish to use it.

Please see the [Concepts Library](https://github.com/Sygil-Dev/sygil-webui/blob/master/docs/7.concepts-library.md) section to learn more about how to use these tools.

## Textual Inversion

---
@@ -2,9 +2,9 @@
title: Gradio Web UI Interface
---
<!--
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -2,9 +2,9 @@
title: Upscalers
---
<!--
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -32,7 +32,7 @@ GFPGAN is designed to help restore faces in Stable Diffusion outputs. If you hav

If you want to use GFPGAN to improve generated faces, you need to download the models for it separately if you are on Windows, or do so manually on Linux.

Download [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and put it
into the `/sygil-webui/models/gfpgan` directory after you have set up the conda environment for the first time.

## RealESRGAN

---

@@ -42,7 +42,7 @@ RealESRGAN is a 4x upscaler built into both versions of the Web UI interface. It

If you want to use RealESRGAN to upscale your images, you need to download the models for it separately if you are on Windows, or do so manually on Linux.

Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).

Put them into the `sygil-webui/models/realesrgan` directory after you have set up the conda environment for the first time.

## GoBig (Gradio only currently)

---

@@ -57,7 +57,7 @@ To use GoBig, you will need to download the RealESRGAN models as directed above.

LDSR is a 4X upscaler with high VRAM usage that uses a Latent Diffusion model to upscale the image. This will accentuate the details of an image, but won't change the composition. This might introduce sharpening, but it is great for textures or compositions with plenty of details. However, it is slower and will use more VRAM.

If you want to use LDSR to upscale your images, you need to download the models for it separately if you are on Windows, or do so manually on Linux.

Download the LDSR [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.ckpt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename `last.ckpt` to `model.ckpt` and place both in the `sygil-webui/models/ldsr` directory after you have set up the conda environment for the first time.

## GoLatent (Gradio only currently)

---
@@ -1,7 +1,7 @@
<!--
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -2,9 +2,9 @@
title: Custom models
---
<!--
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -1,7 +1,7 @@
#!/bin/bash
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -111,7 +111,7 @@ if [[ -e "${MODEL_DIR}/sd-concepts-library" ]]; then
else
    # concept library does not exist, clone
    cd ${MODEL_DIR}
    git clone https://github.com/Sygil-Dev/sd-concepts-library.git
fi

# create directory and link concepts library
mkdir -p ${SCRIPT_DIR}/models/custom
@@ -1,7 +1,7 @@
name: ldm
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -1,7 +1,7 @@
/*
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -1,7 +1,7 @@
/*
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -26,10 +26,11 @@ button[data-baseweb="tab"] {
}

/* Image Container (only appear after run finished)//center the image, especially better looks in wide screen */
.css-1kyxreq{
    justify-content: center;
}

/* Streamlit header */
.css-1avcm0n {
    background-color: transparent;
@@ -135,6 +136,7 @@ div.gallery:hover {
/********************************************************************
Hide anchor links on titles
*********************************************************************/
/*
.css-15zrgzn {
    display: none
}
@@ -145,8 +147,32 @@ div.gallery:hover {
    display: none
}

/* Make the text area widget have a similar height as the text input field */
.st-dy{
    height: 54px;
    min-height: 25px;
}
.css-17useex{
gap: 3px;
}
/* Remove some empty spaces to make the UI more compact. */
.css-18e3th9{
padding-left: 10px;
padding-right: 10px;
position: unset !important; /* Fixes the layout/page going up when an expander or another item is expanded and then collapsed */
}
.css-k1vhr4{
padding-top: initial;
}
.css-ret2ud{
padding-left: 10px;
padding-right: 25px;
gap: initial;
display: initial;
}
.css-w5z5an{
gap: 1px;
}
@@ -1,7 +1,7 @@
/*
This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
Copyright 2022 Sygil-Dev team.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
@@ -1,6 +1,6 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -1,6 +1,6 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -499,11 +499,11 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
if GFPGAN is None:
    gr.HTML("""
    <div id="90" style="max-width: 100%; font-size: 14px; text-align: center;" class="output-markdown gr-prose border-solid border border-gray-200 rounded gr-panel">
    <p><b> Please download GFPGAN to activate face fixing features</b>, instructions are available at the <a href='https://github.com/Sygil-Dev/sygil-webui'>Github</a></p>
    </div>
    """)
    # gr.Markdown("")
    # gr.Markdown("<b> Please download GFPGAN to activate face fixing features</b>, instructions are available at the <a href='https://github.com/Sygil-Dev/sygil-webui'>Github</a>")
with gr.Column():
    gr.Markdown("<b>GFPGAN Settings</b>")
    imgproc_gfpgan_strength = gr.Slider(minimum=0.0, maximum=1.0, step=0.001,
@@ -517,7 +517,7 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
else:
    gr.HTML("""
    <div id="90" style="max-width: 100%; font-size: 14px; text-align: center;" class="output-markdown gr-prose border-solid border border-gray-200 rounded gr-panel">
    <p><b> Please download LDSR to activate more upscale features</b>, instructions are available at the <a href='https://github.com/Sygil-Dev/sygil-webui'>Github</a></p>
    </div>
    """)
upscaleModes = ['RealESRGAN', 'GoBig']
@@ -627,7 +627,7 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
# separator
gr.HTML("""
<div id="90" style="max-width: 100%; font-size: 14px; text-align: center;" class="output-markdown gr-prose border-solid border border-gray-200 rounded gr-panel">
<p><b> Please download RealESRGAN to activate upscale features</b>, instructions are available at the <a href='https://github.com/Sygil-Dev/sygil-webui'>Github</a></p>
</div>
""")
imgproc_toggles.change(fn=uifn.toggle_options_gfpgan, inputs=[imgproc_toggles], outputs=[gfpgan_group])
@@ -860,9 +860,9 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
"""
gr.HTML("""
<div id="90" style="max-width: 100%; font-size: 14px; text-align: center;" class="output-markdown gr-prose border-solid border border-gray-200 rounded gr-panel">
<p>For help and advanced usage guides, visit the <a href="https://github.com/Sygil-Dev/sygil-webui/wiki" target="_blank">Project Wiki</a></p>
<p>Stable Diffusion WebUI is an open-source project. You can find the latest stable builds on the <a href="https://github.com/Sygil-Dev/stable-diffusion" target="_blank">main repository</a>.
If you would like to contribute to development or test bleeding edge builds, you can visit the <a href="https://github.com/Sygil-Dev/sygil-webui" target="_blank">development repository</a>.</p>
<p>Device ID {current_device_index}: {current_device_name}<br/>{total_device_count} total devices</p>
</div>
""".format(current_device_name=torch.cuda.get_device_name(), current_device_index=torch.cuda.current_device(), total_device_count=torch.cuda.device_count()))
@@ -1,6 +1,6 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -1,6 +1,6 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -1,6 +1,6 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -1,7 +1,7 @@
@echo off
:: This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
:: Copyright 2022 Sygil-Dev team.
:: This program is free software: you can redistribute it and/or modify
:: it under the terms of the GNU Affero General Public License as published by
:: the Free Software Foundation, either version 3 of the License, or
@@ -1,7 +1,7 @@
#!/bin/bash -i
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -30,7 +30,7 @@ LSDR_CONFIG="https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1"
LSDR_MODEL="https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1"
REALESRGAN_MODEL="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth"
REALESRGAN_ANIME_MODEL="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth"
SD_CONCEPT_REPO="https://github.com/Sygil-Dev/sd-concepts-library/archive/refs/heads/main.zip"

if [[ -f $ENV_MODIFED_FILE ]]; then
@@ -91,7 +91,7 @@ sd_model_loading () {
    printf "AI Model already in place. Continuing...\n\n"
else
    printf "\n\n########## MOVE MODEL FILE ##########\n\n"
    printf "Please download the 1.4 AI Model from Huggingface (or another source) and place it inside of the sygil-webui folder\n\n"
    read -p "Once you have sd-v1-4.ckpt in the project root, Press Enter...\n\n"

    # Check to make sure checksum of models is the original one from HuggingFace and not a fake model set
@@ -0,0 +1,55 @@
import k_diffusion as K
import torch
import torch.nn as nn


class KDiffusionSampler:
    def __init__(self, m, sampler, callback=None):
        self.model = m
        self.model_wrap = K.external.CompVisDenoiser(m)
        self.schedule = sampler
        self.generation_callback = callback

    def get_sampler_name(self):
        return self.schedule

    def sample(self, S, conditioning, unconditional_guidance_scale, unconditional_conditioning, x_T):
        # Build the noise schedule for S steps and scale the starting latent by the first sigma.
        sigmas = self.model_wrap.get_sigmas(S)
        x = x_T * sigmas[0]
        model_wrap_cfg = CFGDenoiser(self.model_wrap)
        samples_ddim = None
        # Dispatch to the requested k-diffusion sampler (e.g. sample_euler, sample_lms).
        samples_ddim = K.sampling.__dict__[f'sample_{self.schedule}'](
            model_wrap_cfg, x, sigmas,
            extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': unconditional_guidance_scale},
            disable=False, callback=self.generation_callback)
        #
        return samples_ddim, None


class CFGMaskedDenoiser(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.inner_model = model

    def forward(self, x, sigma, uncond, cond, cond_scale, mask, x0, xi):
        # Run the conditional and unconditional branches in one batched call.
        x_in = x
        x_in = torch.cat([x_in] * 2)
        sigma_in = torch.cat([sigma] * 2)
        cond_in = torch.cat([uncond, cond])
        uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
        # Classifier-free guidance: push the prediction away from the unconditional branch.
        denoised = uncond + (cond - uncond) * cond_scale

        if mask is not None:
            assert x0 is not None
            # Keep the masked-out region pinned to the original latent (inpainting behaviour).
            img_orig = x0
            mask_inv = 1. - mask
            denoised = (img_orig * mask_inv) + (mask * denoised)

        return denoised


class CFGDenoiser(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.inner_model = model

    def forward(self, x, sigma, uncond, cond, cond_scale):
        x_in = torch.cat([x] * 2)
        sigma_in = torch.cat([sigma] * 2)
        cond_in = torch.cat([uncond, cond])
        uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
        # Classifier-free guidance combination.
        return uncond + (cond - uncond) * cond_scale
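For orientation, the wrapper above can be driven roughly as follows. This is a minimal sketch, not part of the commit: the loaded CompVis LatentDiffusion `model`, the prompt, the CUDA device, and the 512x512 latent shape are all assumptions for illustration.

```python
# Hypothetical driver for KDiffusionSampler; assumes a CompVis LatentDiffusion
# `model` is already loaded on a CUDA device.
import torch

sampler = KDiffusionSampler(model, 'euler')            # dispatches to K.sampling.sample_euler

cond   = model.get_learned_conditioning(["a watercolor painting of a fox"])
uncond = model.get_learned_conditioning([""])

# Start from Gaussian noise in latent space (4 x 64 x 64 for 512x512 SD v1.x).
x_T = torch.randn([1, 4, 64, 64], device="cuda")

samples, _ = sampler.sample(S=30,
                            conditioning=cond,
                            unconditional_guidance_scale=7.5,
                            unconditional_conditioning=uncond,
                            x_T=x_T)

images = model.decode_first_stage(samples)             # latents -> image tensor in [-1, 1]
```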
@@ -34,10 +34,15 @@ streamlit-tensorboard==0.0.2
hydralit==1.0.14
hydralit_components==1.0.10
stqdm==0.0.4
uvicorn
fastapi
jsonmerge==1.8.
matplotlib==3.6.
resize-right==0.0.2
torchdiffeq==0.2.3

# txt2vid
diffusers==0.6.0
librosa==0.9.2

# img2img inpainting
@@ -51,11 +56,11 @@ timm==0.6.7
tqdm==4.64.0
tensorboard==2.10.1

# Other
retry==0.9.2 # used by sd_utils
python-slugify==6.1.2 # used by sd_utils
piexif==1.1.3 # used by sd_utils
pywebview==3.6.3 # used by streamlit_webview.py

accelerate==0.12.0
albumentations==0.4.3
@@ -1,7 +1,7 @@
#!/bin/bash
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
scripts/APIServer.py (new file, 36 lines)
@@ -0,0 +1,36 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

# base webui import and utils.
#from sd_utils import *
from sd_utils import *

# streamlit imports

#streamlit components section

#other imports
import os, time, requests
import sys
#from fastapi import FastAPI
#import uvicorn

# Temp imports

# end of imports
#---------------------------------------------------------------------------------------------------------------

def layout():
    st.info("Under Construction. :construction_worker:")
@@ -1,6 +1,6 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -19,6 +19,9 @@ from sd_utils import *
#other imports
from requests.auth import HTTPBasicAuth
from requests import HTTPError
from stqdm import stqdm

# Temp imports
@@ -33,15 +36,35 @@ def download_file(file_name, file_path, file_url):
    print('Downloading ' + file_name + '...')
    # TODO - add progress bar in streamlit
    # download file with `requests`
    if file_name == "Stable Diffusion v1.5":
        if "huggingface_token" not in st.session_state or st.session_state["defaults"].general.huggingface_token == "None":
            if "progress_bar_text" in st.session_state:
                st.session_state["progress_bar_text"].error(
                    "You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token."
                )

            raise OSError("You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token.")

    try:
        with requests.get(file_url, auth = HTTPBasicAuth('token', st.session_state.defaults.general.huggingface_token) if "huggingface.co" in file_url else None, stream=True) as r:
            r.raise_for_status()
            with open(os.path.join(file_path, file_name), 'wb') as f:
                for chunk in stqdm(r.iter_content(chunk_size=8192), backend=True, unit="kb"):
                    f.write(chunk)
    except HTTPError as e:
        if "huggingface.co" in file_url:
            if "resolve" in file_url:
                repo_url = file_url.split("resolve")[0]

            st.session_state["progress_bar_text"].error(
                f"You need to accept the license for the model in order to be able to download it. "
                f"Please visit {repo_url} and accept the license there, then try again to download the model.")

            logger.error(e)
else:
    print(file_name + ' already exists.')


def download_model(models, model_name):
    """ Download all files from model_list[model_name] """
    for file in models[model_name]:
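The gated-download pattern this hunk introduces (basic auth with the literal user `token` against huggingface.co URLs, streamed in 8 KB chunks) can be sketched in isolation as follows; the helper name, paths, and token value below are placeholders, not part of the commit.

```python
# Standalone sketch of the authenticated streaming download used above.
import os
import requests
from requests.auth import HTTPBasicAuth

def fetch(file_url: str, dest_path: str, hf_token: str = None):
    # Hugging Face gated files accept basic auth with the fixed username "token".
    auth = HTTPBasicAuth('token', hf_token) if "huggingface.co" in file_url and hf_token else None
    with requests.get(file_url, auth=auth, stream=True) as r:
        r.raise_for_status()
        os.makedirs(os.path.dirname(dest_path), exist_ok=True)
        with open(dest_path, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)

# Example (hypothetical paths):
# fetch("https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt",
#       "models/ldm/stable-diffusion-v1/model.ckpt", hf_token="hf_...")
```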
@@ -52,8 +75,8 @@ def download_model(models, model_name):
def layout():
    #search = st.text_input(label="Search", placeholder="Type the name of the model you want to search for.", help="")

    colms = st.columns((1, 3, 3, 5, 5))
    columns = ["", 'Model Name', 'Save Location', "Download", 'Download Link']

    models = st.session_state["defaults"].model_manager.models
@@ -62,7 +85,7 @@ def layout():
        col.write(field_name)

    for x, model_name in enumerate(models):
        col1, col2, col3, col4, col5 = st.columns((1, 3, 3, 3, 6))
        col1.write(x)  # index
        col2.write(models[model_name]['model_name'])
        col3.write(models[model_name]['save_location'])
@@ -88,7 +111,10 @@ def layout():
                        download_file(models[model_name]['files'][file]['file_name'], models[model_name]['files'][file]['save_location'], models[model_name]['files'][file]['download_link'])
                    else:
                        download_file(models[model_name]['files'][file]['file_name'], models[model_name]['save_location'], models[model_name]['files'][file]['download_link'])

                    st.experimental_rerun()
                else:
                    st.empty()
        else:
            st.write('')
#
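For reference, a minimal sketch of the columns-as-table layout this hunk extends, with the new per-row Download column; the sample model entry and widget keys are made up for illustration and are not part of the commit.

```python
# Minimal sketch of the Model Manager's columns-as-table pattern in Streamlit.
import streamlit as st

models = {"GFPGAN v1.4": {"save_location": "models/gfpgan",
                          "download_link": "https://example.com/GFPGANv1.4.pth"}}

# Header row mirrors the widths and labels used above.
for col, name in zip(st.columns((1, 3, 3, 5, 5)),
                     ["", "Model Name", "Save Location", "Download", "Download Link"]):
    col.write(name)

for i, (name, info) in enumerate(models.items()):
    col1, col2, col3, col4, col5 = st.columns((1, 3, 3, 3, 6))
    col1.write(i)                       # index
    col2.write(name)
    col3.write(info["save_location"])
    if col4.button("Download", key=f"dl-{name}"):
        st.write("downloading...")      # the real code calls download_file() here
    col5.write(info["download_link"])
```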
@@ -1,6 +1,6 @@
# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
@@ -28,9 +28,9 @@ from omegaconf import OmegaConf
# end of imports
# ---------------------------------------------------------------------------------------------------------------

@logger.catch(reraise=True)
def layout():
    #st.header("Settings")

    with st.form("Settings"):
        general_tab, txt2img_tab, img2img_tab, img2txt_tab, txt2vid_tab, image_processing, textual_inversion_tab, concepts_library_tab = st.tabs(
@@ -61,7 +61,7 @@ def layout():
        custom_models_available()
        if server_state["CustomModel_available"]:
            st.session_state.defaults.general.default_model = st.selectbox("Default Model:", server_state["custom_models"],
                index=server_state["custom_models"].index(st.session_state['defaults'].general.default_model),
                help="Select the model you want to use. If you have placed custom models \
                on your 'models/custom' folder they will be shown here as well. The model name that will be shown here \
@@ -69,7 +69,7 @@ def layout():
                it is recommended to give the .ckpt file a name that \
                will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.4")
        else:
            st.session_state.defaults.general.default_model = st.selectbox("Default Model:", [st.session_state['defaults'].general.default_model],
                help="Select the model you want to use. If you have placed custom models \
                on your 'models/custom' folder they will be shown here as well. \
                The model name that will be shown here is the same as the name\
@ -159,7 +159,7 @@ def layout():
# Default: True") # Default: True")
st.session_state["defaults"].general.update_preview = True st.session_state["defaults"].general.update_preview = True
st.session_state["defaults"].general.update_preview_frequency = st.number_input("Update Preview Frequency", st.session_state["defaults"].general.update_preview_frequency = st.number_input("Update Preview Frequency",
min_value=1, min_value=0,
value=st.session_state['defaults'].general.update_preview_frequency, value=st.session_state['defaults'].general.update_preview_frequency,
help="Specify the frequency at which the image is updated in steps, this is helpful to reduce the \ help="Specify the frequency at which the image is updated in steps, this is helpful to reduce the \
negative effect updating the preview image has on performance. Default: 10") negative effect updating the preview image has on performance. Default: 10")
@ -181,15 +181,17 @@ def layout():
st.session_state["defaults"].general.save_metadata = st.checkbox("Save Metadata", value=st.session_state['defaults'].general.save_metadata, st.session_state["defaults"].general.save_metadata = st.checkbox("Save Metadata", value=st.session_state['defaults'].general.save_metadata,
help="Save metadata on the output image. Default: True") help="Save metadata on the output image. Default: True")
save_format_list = ["png"] save_format_list = ["png","jpg", "jpeg","webp"]
st.session_state["defaults"].general.save_format = st.selectbox("Save Format", save_format_list, index=save_format_list.index(st.session_state['defaults'].general.save_format), st.session_state["defaults"].general.save_format = st.selectbox("Save Format", save_format_list, index=save_format_list.index(st.session_state['defaults'].general.save_format),
help="Format that will be used whens saving the output images. Default: 'png'") help="Format that will be used whens saving the output images. Default: 'png'")
st.session_state["defaults"].general.skip_grid = st.checkbox("Skip Grid", value=st.session_state['defaults'].general.skip_grid, st.session_state["defaults"].general.skip_grid = st.checkbox("Skip Grid", value=st.session_state['defaults'].general.skip_grid,
help="Skip saving the grid output image. Default: False") help="Skip saving the grid output image. Default: False")
if not st.session_state["defaults"].general.skip_grid: if not st.session_state["defaults"].general.skip_grid:
st.session_state["defaults"].general.grid_format = st.text_input("Grid Format", value=st.session_state['defaults'].general.grid_format,
help="Format for saving the grid output image. Default: 'jpg:95'")
st.session_state["defaults"].general.grid_quality = st.number_input("Grid Quality", value=st.session_state['defaults'].general.grid_quality,
help="Format for saving the grid output image. Default: 95")
st.session_state["defaults"].general.skip_save = st.checkbox("Skip Save", value=st.session_state['defaults'].general.skip_save, st.session_state["defaults"].general.skip_save = st.checkbox("Skip Save", value=st.session_state['defaults'].general.skip_save,
help="Skip saving the output image. Default: False") help="Skip saving the output image. Default: False")
@ -325,7 +327,7 @@ def layout():
st.session_state["defaults"].txt2img.update_preview = True st.session_state["defaults"].txt2img.update_preview = True
st.session_state["defaults"].txt2img.update_preview_frequency = st.number_input("Preview Image Update Frequency", st.session_state["defaults"].txt2img.update_preview_frequency = st.number_input("Preview Image Update Frequency",
min_value=1, min_value=0,
value=st.session_state['defaults'].txt2img.update_preview_frequency, value=st.session_state['defaults'].txt2img.update_preview_frequency,
help="Set the default value for the frrquency of the preview image updates. Default is: 10") help="Set the default value for the frrquency of the preview image updates. Default is: 10")
@ -518,7 +520,7 @@ def layout():
st.session_state["defaults"].img2img.update_preview = True st.session_state["defaults"].img2img.update_preview = True
st.session_state["defaults"].img2img.update_preview_frequency = st.number_input("Img2Img Preview Image Update Frequency", st.session_state["defaults"].img2img.update_preview_frequency = st.number_input("Img2Img Preview Image Update Frequency",
min_value=1, min_value=0,
value=st.session_state['defaults'].img2img.update_preview_frequency, value=st.session_state['defaults'].img2img.update_preview_frequency,
help="Set the default value for the frrquency of the preview image updates. Default is: 10") help="Set the default value for the frrquency of the preview image updates. Default is: 10")
@ -684,8 +686,8 @@ def layout():
st.session_state["defaults"].txt2vid.do_loop = st.checkbox("Loop Generations", value=st.session_state['defaults'].txt2vid.do_loop, st.session_state["defaults"].txt2vid.do_loop = st.checkbox("Loop Generations", value=st.session_state['defaults'].txt2vid.do_loop,
help="Choose to loop or something, IDK.... Default: False") help="Choose to loop or something, IDK.... Default: False")
st.session_state["defaults"].txt2vid.max_frames = st.number_input("Txt2Vid Max Video Frames", value=st.session_state['defaults'].txt2vid.max_frames, st.session_state["defaults"].txt2vid.max_duration_in_seconds = st.number_input("Txt2Vid Max Duration in Seconds", value=st.session_state['defaults'].txt2vid.max_duration_in_seconds,
help="Set the default value for the number of video frames generated. Default is: 100") help="Set the default value for the max duration in seconds for the video generated. Default is: 30")
st.session_state["defaults"].txt2vid.write_info_files = st.checkbox("Write Info Files For txt2vid Images", value=st.session_state['defaults'].txt2vid.write_info_files, st.session_state["defaults"].txt2vid.write_info_files = st.checkbox("Write Info Files For txt2vid Images", value=st.session_state['defaults'].txt2vid.write_info_files,
help="Choose to write the info files along with the generated images. Default: True") help="Choose to write the info files along with the generated images. Default: True")

View File

@ -0,0 +1,11 @@
import os
import streamlit.components.v1 as components
def load(pixel_per_step = 50):
parent_dir = os.path.dirname(os.path.abspath(__file__))
file = os.path.join(parent_dir, "main.js")
with open(file) as f:
javascript_main = f.read()
javascript_main = javascript_main.replace("%%pixelPerStep%%",str(pixel_per_step))
components.html(f"<script>{javascript_main}</script>")

View File

@ -0,0 +1,192 @@
// iframe parent
var parentDoc = window.parent.document
// check for mouse pointer locking support, not a requirement but improves the overall experience
var havePointerLock = 'pointerLockElement' in parentDoc ||
'mozPointerLockElement' in parentDoc ||
'webkitPointerLockElement' in parentDoc;
// the pointer locking exit function
parentDoc.exitPointerLock = parentDoc.exitPointerLock || parentDoc.mozExitPointerLock || parentDoc.webkitExitPointerLock;
// how far should the mouse travel for a step in pixel
var pixelPerStep = %%pixelPerStep%%;
// how many steps did the mouse move in as float
var movementDelta = 0.0;
// value when drag started
var lockedValue = 0.0;
// minimum value from field
var lockedMin = 0.0;
// maximum value from field
var lockedMax = 0.0;
// how big should the field steps be
var lockedStep = 0.0;
// the currently locked in field
var lockedField = null;
// lock box to just request pointer lock for one element
var lockBox = document.createElement("div");
lockBox.classList.add("lockbox");
parentDoc.body.appendChild(lockBox);
lockBox.requestPointerLock = lockBox.requestPointerLock || lockBox.mozRequestPointerLock || lockBox.webkitRequestPointerLock;
function Lock(field)
{
var rect = field.getBoundingClientRect();
lockBox.style.left = (rect.left-2.5)+"px";
lockBox.style.top = (rect.top-2.5)+"px";
lockBox.style.width = (rect.width+2.5)+"px";
lockBox.style.height = (rect.height+5)+"px";
lockBox.requestPointerLock();
}
function Unlock()
{
parentDoc.exitPointerLock();
lockBox.style.left = "0px";
lockBox.style.top = "0px";
lockBox.style.width = "0px";
lockBox.style.height = "0px";
lockedField.focus();
}
parentDoc.addEventListener('mousedown', (e) => {
// if middle is down
if(e.button === 1)
{
if(e.target.tagName === 'INPUT' && e.target.type === 'number')
{
e.preventDefault();
var field = e.target;
if(havePointerLock)
Lock(field);
// save current field
lockedField = e.target;
// add class for styling
lockedField.classList.add("value-dragging");
// reset movement delta
movementDelta = 0.0;
// set to 0 if field is empty
if(lockedField.value === '')
lockedField.value = 0.0;
// save current field value
lockedValue = parseFloat(lockedField.value);
if(lockedField.min === '' || lockedField.min === '-Infinity')
lockedMin = -99999999.0;
else
lockedMin = parseFloat(lockedField.min);
if(lockedField.max === '' || lockedField.max === 'Infinity')
lockedMax = 99999999.0;
else
lockedMax = parseFloat(lockedField.max);
if(lockedField.step === '' || lockedField.step === 'Infinity')
lockedStep = 1.0;
else
lockedStep = parseFloat(lockedField.step);
// lock pointer if available
if(havePointerLock)
Lock(lockedField);
// add drag event
parentDoc.addEventListener("mousemove", onDrag, false);
}
}
});
function onDrag(e)
{
if(lockedField !== null)
{
// add movement to delta
movementDelta += e.movementX / pixelPerStep;
        // abort if the starting value could not be parsed as a number
        if(isNaN(lockedValue))
            return;
// set new value
let value = lockedValue + Math.floor(Math.abs(movementDelta)) * lockedStep * Math.sign(movementDelta);
lockedField.focus();
lockedField.select();
parentDoc.execCommand('insertText', false /*no UI*/, Math.min(Math.max(value, lockedMin), lockedMax));
}
}
parentDoc.addEventListener('mouseup', (e) => {
// if mouse is up
if(e.button === 1)
{
// release pointer lock if available
if(havePointerLock)
Unlock();
        if(lockedField !== null)
{
// stop drag event
parentDoc.removeEventListener("mousemove", onDrag, false);
// remove class for styling
lockedField.classList.remove("value-dragging");
// remove reference
lockedField = null;
}
}
});
// only execute once (even though multiple iframes exist)
if(!parentDoc.hasOwnProperty("dragableInitialized"))
{
var parentCSS =
`
/* Make input-instruction not block mouse events */
.input-instructions,.input-instructions > *{
pointer-events: none;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
}
.lockbox {
background-color: transparent;
position: absolute;
pointer-events: none;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
border-left: dotted 2px rgb(255,75,75);
border-top: dotted 2px rgb(255,75,75);
border-bottom: dotted 2px rgb(255,75,75);
border-right: dotted 1px rgba(255,75,75,0.2);
border-top-left-radius: 0.25rem;
border-bottom-left-radius: 0.25rem;
z-index: 1000;
}
`;
// get parent document head
var head = parentDoc.getElementsByTagName('head')[0];
// add style tag
var s = document.createElement('style');
// set type attribute
s.setAttribute('type', 'text/css');
// add css forwarded from python
if (s.styleSheet) { // IE
s.styleSheet.cssText = parentCSS;
} else { // the world
s.appendChild(document.createTextNode(parentCSS));
}
// add style to head
head.appendChild(s);
// set flag so this only runs once
parentDoc["dragableInitialized"] = true;
}

View File

@ -0,0 +1,46 @@
import os
from collections import defaultdict
import streamlit.components.v1 as components
# where to save the downloaded key_phrases
key_phrases_file = "data/tags/key_phrases.json"
# the loaded key phrase json as text
key_phrases_json = ""
# where to save the downloaded thumbnails
thumbnails_file = "data/tags/thumbnails.json"
# the loaded thumbnails json as text
thumbnails_json = ""
def init():
global key_phrases_json, thumbnails_json
with open(key_phrases_file) as f:
key_phrases_json = f.read()
with open(thumbnails_file) as f:
thumbnails_json = f.read()
def suggestion_area(placeholder):
# get component path
parent_dir = os.path.dirname(os.path.abspath(__file__))
# get file paths
javascript_file = os.path.join(parent_dir, "main.js")
stylesheet_file = os.path.join(parent_dir, "main.css")
parent_stylesheet_file = os.path.join(parent_dir, "parent.css")
# load file texts
with open(javascript_file) as f:
javascript_main = f.read()
with open(stylesheet_file) as f:
stylesheet_main = f.read()
with open(parent_stylesheet_file) as f:
parent_stylesheet = f.read()
# add suggestion area div box
html = "<div id='scroll_area' class='st-bg'><div id='suggestion_area'>javascript failed</div></div>"
# add loaded style
html += f"<style>{stylesheet_main}</style>"
# set default variables
html += f"<script>var thumbnails = {thumbnails_json};\nvar keyPhrases = {key_phrases_json};\nvar parentCSS = `{parent_stylesheet}`;\nvar placeholder='{placeholder}';</script>"
# add main java script
html += f"\n<script>{javascript_main}</script>"
# add component to site
components.html(html, width=None, height=None, scrolling=True)
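
The pages further down in this diff use the component in two steps: `init()` once to read the JSON files, then `suggestion_area(placeholder)` directly under a prompt field. A minimal sketch of that wiring; the prompt text mirrors the placeholder used by the img2img page, and the standalone page setup is illustrative:

```python
import streamlit as st
from custom_components import sygil_suggestions

sygil_suggestions.init()

placeholder = "A corgi wearing a top hat as an oil painting."
prompt = st.text_area("Input Text", "", placeholder=placeholder, height=54)
sygil_suggestions.suggestion_area(placeholder)
```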

View File

@ -0,0 +1,81 @@
*
{
padding: 0px;
margin: 0px;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
}
body
{
width: 100%;
height: 100%;
padding-left: calc( 1em - 1px );
padding-top: calc( 1em - 1px );
overflow: hidden;
}
/* width */
::-webkit-scrollbar {
width: 7px;
}
/* Track */
::-webkit-scrollbar-track {
background: rgb(10, 13, 19);
}
/* Handle */
::-webkit-scrollbar-thumb {
background: #6c6e72;
border-radius: 3px;
}
/* Handle on hover */
::-webkit-scrollbar-thumb:hover {
background: #6c6e72;
}
#scroll_area
{
display: flex;
overflow-x: hidden;
overflow-y: auto;
}
#suggestion_area
{
overflow-x: hidden;
width: calc( 100% - 2em - 2px );
margin-bottom: calc( 1em + 13px );
min-height: 50px;
}
span
{
border: 1px solid rgba(250, 250, 250, 0.2);
border-radius: 0.25rem;
font-size: 1rem;
font-family: "Source Sans Pro", sans-serif;
background-color: rgb(38, 39, 48);
color: white;
display: inline-block;
padding: 0.5rem;
margin-right: 3px;
cursor: pointer;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
}
span:hover
{
color: rgb(255,75,75);
border-color: rgb(255,75,75);
}

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,84 @@
.suggestion-frame
{
position: absolute;
/* make as small as possible */
margin: 0px;
padding: 0px;
min-height: 0px;
line-height: 0;
/* animate transitions of the height property */
-webkit-transition: height 1s;
-moz-transition: height 1s;
-ms-transition: height 1s;
-o-transition: height 1s;
transition: height 1s, border-bottom-width 1s;
/* block selection */
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
z-index: 700;
outline: 1px solid rgba(250, 250, 250, 0.2);
outline-offset: 0px;
border-radius: 0.25rem;
background: rgb(14, 17, 23);
box-sizing: border-box;
-moz-box-sizing: border-box;
-webkit-box-sizing: border-box;
border-bottom: solid 13px rgb(14, 17, 23) !important;
border-left: solid 13px rgb(14, 17, 23) !important;
}
#phrase-tooltip
{
display: none;
pointer-events: none;
position: absolute;
border-bottom-left-radius: 0.5rem;
border-top-right-radius: 0.5rem;
border-bottom-right-radius: 0.5rem;
border: solid rgb(255,75,75) 2px;
background-color: rgb(38, 39, 48);
color: rgb(255,75,75);
font-size: 1rem;
font-family: "Source Sans Pro", sans-serif;
padding: 0.5rem;
cursor: default;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
z-index: 1000;
}
#phrase-tooltip:has(img)
{
transform: scale(1.25, 1.25);
-ms-transform: scale(1.25, 1.25);
-webkit-transform: scale(1.25, 1.25);
}
#phrase-tooltip>img
{
pointer-events: none;
border-bottom-left-radius: 0.5rem;
border-top-right-radius: 0.5rem;
border-bottom-right-radius: 0.5rem;
cursor: default;
user-select: none;
-moz-user-select: none;
-khtml-user-select: none;
-webkit-user-select: none;
-o-user-select: none;
z-index: 1500;
}

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or
@ -30,12 +30,18 @@ import torch
import skimage import skimage
from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.plms import PLMSSampler from ldm.models.diffusion.plms import PLMSSampler
# streamlit components
from custom_components import sygil_suggestions
from streamlit_drawable_canvas import st_canvas
# Temp imports # Temp imports
# end of imports # end of imports
#--------------------------------------------------------------------------------------------------------------- #---------------------------------------------------------------------------------------------------------------
sygil_suggestions.init()
try: try:
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start. # this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
@ -365,7 +371,9 @@ def layout():
img2img_input_col, img2img_generate_col = st.columns([10,1]) img2img_input_col, img2img_generate_col = st.columns([10,1])
with img2img_input_col: with img2img_input_col:
#prompt = st.text_area("Input Text","") #prompt = st.text_area("Input Text","")
prompt = st.text_area("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.") placeholder = "A corgi wearing a top hat as an oil painting."
prompt = st.text_area("Input Text","", placeholder=placeholder, height=54)
sygil_suggestions.suggestion_area(placeholder)
# Every form must have a submit button, the extra blank spaces is a temp way to align it with the input field. Needs to be done in CSS or some other way. # Every form must have a submit button, the extra blank spaces is a temp way to align it with the input field. Needs to be done in CSS or some other way.
img2img_generate_col.write("") img2img_generate_col.write("")
@ -374,7 +382,7 @@ def layout():
# creating the page layout using columns # creating the page layout using columns
col1_img2img_layout, col2_img2img_layout, col3_img2img_layout = st.columns([1,2,2], gap="small") col1_img2img_layout, col2_img2img_layout, col3_img2img_layout = st.columns([2,4,4], gap="medium")
with col1_img2img_layout: with col1_img2img_layout:
# If we have custom models available on the "models/custom" # If we have custom models available on the "models/custom"
@ -386,9 +394,9 @@ def layout():
help="Select the model you want to use. This option is only available if you have custom models \ help="Select the model you want to use. This option is only available if you have custom models \
on your 'models/custom' folder. The model name that will be shown here is the same as the name\ on your 'models/custom' folder. The model name that will be shown here is the same as the name\
the file for the model has on said folder, it is recommended to give the .ckpt file a name that \ the file for the model has on said folder, it is recommended to give the .ckpt file a name that \
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.4") will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.5")
else: else:
st.session_state["custom_model"] = "Stable Diffusion v1.4" st.session_state["custom_model"] = "Stable Diffusion v1.5"
st.session_state["sampling_steps"] = st.number_input("Sampling Steps", value=st.session_state['defaults'].img2img.sampling_steps.value, st.session_state["sampling_steps"] = st.number_input("Sampling Steps", value=st.session_state['defaults'].img2img.sampling_steps.value,
@ -406,6 +414,7 @@ def layout():
seed = st.text_input("Seed:", value=st.session_state['defaults'].img2img.seed, help=" The seed to use, if left blank a random seed will be generated.") seed = st.text_input("Seed:", value=st.session_state['defaults'].img2img.seed, help=" The seed to use, if left blank a random seed will be generated.")
cfg_scale = st.number_input("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].img2img.cfg_scale.min_value, cfg_scale = st.number_input("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].img2img.cfg_scale.min_value,
value=st.session_state['defaults'].img2img.cfg_scale.value,
step=st.session_state['defaults'].img2img.cfg_scale.step, step=st.session_state['defaults'].img2img.cfg_scale.step,
help="How strongly the image should follow the prompt.") help="How strongly the image should follow the prompt.")
@ -418,7 +427,7 @@ def layout():
mask_expander = st.empty() mask_expander = st.empty()
with mask_expander.expander("Mask"): with mask_expander.expander("Mask"):
mask_mode_list = ["Mask", "Inverted mask", "Image alpha"] mask_mode_list = ["Mask", "Inverted mask", "Image alpha"]
mask_mode = st.selectbox("Mask Mode", mask_mode_list, mask_mode = st.selectbox("Mask Mode", mask_mode_list, index=st.session_state["defaults"].img2img.mask_mode,
help="Select how you want your image to be masked.\"Mask\" modifies the image where the mask is white.\n\ help="Select how you want your image to be masked.\"Mask\" modifies the image where the mask is white.\n\
\"Inverted mask\" modifies the image where the mask is black. \"Image alpha\" modifies the image where the image is transparent." \"Inverted mask\" modifies the image where the mask is black. \"Image alpha\" modifies the image where the image is transparent."
) )
@ -426,15 +435,32 @@ def layout():
noise_mode_list = ["Seed", "Find Noise", "Matched Noise", "Find+Matched Noise"] noise_mode_list = ["Seed", "Find Noise", "Matched Noise", "Find+Matched Noise"]
noise_mode = st.selectbox( noise_mode = st.selectbox("Noise Mode", noise_mode_list, index=noise_mode_list.index(st.session_state['defaults'].img2img.noise_mode), help="")
"Noise Mode", noise_mode_list, #noise_mode = noise_mode_list.index(noise_mode)
help=""
)
noise_mode = noise_mode_list.index(noise_mode)
find_noise_steps = st.number_input("Find Noise Steps", value=st.session_state['defaults'].img2img.find_noise_steps.value, find_noise_steps = st.number_input("Find Noise Steps", value=st.session_state['defaults'].img2img.find_noise_steps.value,
min_value=st.session_state['defaults'].img2img.find_noise_steps.min_value, min_value=st.session_state['defaults'].img2img.find_noise_steps.min_value,
step=st.session_state['defaults'].img2img.find_noise_steps.step) step=st.session_state['defaults'].img2img.find_noise_steps.step)
# Specify canvas parameters in application
drawing_mode = st.selectbox(
"Drawing tool:",
(
"freedraw",
"transform",
#"line",
"rect",
"circle",
#"polygon",
),
)
stroke_width = st.slider("Stroke width: ", 1, 100, 50)
stroke_color = st.color_picker("Stroke color hex: ", value="#EEEEEE")
bg_color = st.color_picker("Background color hex: ", "#7B6E6E")
display_toolbar = st.checkbox("Display toolbar", True)
#realtime_update = st.checkbox("Update in realtime", True)
with st.expander("Batch Options"): with st.expander("Batch Options"):
st.session_state["batch_count"] = st.number_input("Batch count.", value=st.session_state['defaults'].img2img.batch_count.value, st.session_state["batch_count"] = st.number_input("Batch count.", value=st.session_state['defaults'].img2img.batch_count.value,
help="How many iterations or batches of images to generate in total.") help="How many iterations or batches of images to generate in total.")
@ -447,7 +473,7 @@ def layout():
with st.expander("Preview Settings"): with st.expander("Preview Settings"):
st.session_state["update_preview"] = st.session_state["defaults"].general.update_preview st.session_state["update_preview"] = st.session_state["defaults"].general.update_preview
st.session_state["update_preview_frequency"] = st.number_input("Update Image Preview Frequency", st.session_state["update_preview_frequency"] = st.number_input("Update Image Preview Frequency",
min_value=1, min_value=0,
value=st.session_state['defaults'].img2img.update_preview_frequency, value=st.session_state['defaults'].img2img.update_preview_frequency,
help="Frequency in steps at which the the preview image is updated. By default the frequency \ help="Frequency in steps at which the the preview image is updated. By default the frequency \
is set to 1 step.") is set to 1 step.")
@ -575,55 +601,63 @@ def layout():
editor_image = st.empty() editor_image = st.empty()
st.session_state["editor_image"] = editor_image st.session_state["editor_image"] = editor_image
st.form_submit_button("Refresh")
#if "canvas" not in st.session_state:
st.session_state["canvas"] = st.empty()
masked_image_holder = st.empty() masked_image_holder = st.empty()
image_holder = st.empty() image_holder = st.empty()
st.form_submit_button("Refresh")
uploaded_images = st.file_uploader( uploaded_images = st.file_uploader(
"Upload Image", accept_multiple_files=False, type=["png", "jpg", "jpeg", "webp"], "Upload Image", accept_multiple_files=False, type=["png", "jpg", "jpeg", "webp", 'jfif'],
help="Upload an image which will be used for the image to image generation.", help="Upload an image which will be used for the image to image generation.",
) )
if uploaded_images: if uploaded_images:
image = Image.open(uploaded_images).convert('RGBA') image = Image.open(uploaded_images).convert('RGB')
new_img = image.resize((width, height)) new_img = image.resize((width, height))
image_holder.image(new_img) #image_holder.image(new_img)
mask_holder = st.empty() #mask_holder = st.empty()
uploaded_masks = st.file_uploader( #uploaded_masks = st.file_uploader(
"Upload Mask", accept_multiple_files=False, type=["png", "jpg", "jpeg", "webp"], #"Upload Mask", accept_multiple_files=False, type=["png", "jpg", "jpeg", "webp", 'jfif'],
help="Upload an mask image which will be used for masking the image to image generation.", #help="Upload an mask image which will be used for masking the image to image generation.",
#)
#
# Create a canvas component
with st.session_state["canvas"]:
st.session_state["uploaded_masks"] = st_canvas(
fill_color="rgba(255, 165, 0, 0.3)", # Fixed fill color with some opacity
stroke_width=stroke_width,
stroke_color=stroke_color,
background_color=bg_color,
background_image=image if uploaded_images else None,
update_streamlit=True,
width=width,
height=height,
drawing_mode=drawing_mode,
initial_drawing=st.session_state["uploaded_masks"].json_data if "uploaded_masks" in st.session_state else None,
display_toolbar= display_toolbar,
key="full_app",
) )
if uploaded_masks:
mask_expander.expander("Mask", expanded=True)
mask = Image.open(uploaded_masks)
if mask.mode == "RGBA":
mask = mask.convert('RGBA')
background = Image.new('RGBA', mask.size, (0, 0, 0))
mask = Image.alpha_composite(background, mask)
mask = mask.resize((width, height))
mask_holder.image(mask)
if uploaded_images and uploaded_masks: #try:
if mask_mode != 2: ##print (type(st.session_state["uploaded_masks"]))
final_img = new_img.copy() #if st.session_state["uploaded_masks"] != None:
alpha_layer = mask.convert('L') #mask_expander.expander("Mask", expanded=True)
strength = st.session_state["denoising_strength"] #mask = Image.fromarray(st.session_state["uploaded_masks"].image_data)
if mask_mode == 0:
alpha_layer = ImageOps.invert(alpha_layer)
alpha_layer = alpha_layer.point(lambda a: a * strength)
alpha_layer = ImageOps.invert(alpha_layer)
elif mask_mode == 1:
alpha_layer = alpha_layer.point(lambda a: a * strength)
alpha_layer = ImageOps.invert(alpha_layer)
final_img.putalpha(alpha_layer) #st.image(mask)
with masked_image_holder.container():
st.text("Masked Image Preview")
st.image(final_img)
#if mask.mode == "RGBA":
#mask = mask.convert('RGBA')
#background = Image.new('RGBA', mask.size, (0, 0, 0))
#mask = Image.alpha_composite(background, mask)
#mask = mask.resize((width, height))
#except AttributeError:
#pass
with col3_img2img_layout: with col3_img2img_layout:
result_tab = st.tabs(["Result"]) result_tab = st.tabs(["Result"])
@ -637,7 +671,6 @@ def layout():
st.session_state["progress_bar_text"] = st.empty() st.session_state["progress_bar_text"] = st.empty()
st.session_state["progress_bar"] = st.empty() st.session_state["progress_bar"] = st.empty()
message = st.empty() message = st.empty()
#if uploaded_images: #if uploaded_images:
@ -658,14 +691,17 @@ def layout():
CustomModel_available=server_state["CustomModel_available"], custom_model=st.session_state["custom_model"]) CustomModel_available=server_state["CustomModel_available"], custom_model=st.session_state["custom_model"])
if uploaded_images: if uploaded_images:
image = Image.open(uploaded_images).convert('RGBA') #image = Image.fromarray(image).convert('RGBA')
new_img = image.resize((width, height)) #new_img = image.resize((width, height))
#img_array = np.array(image) # if you want to pass it to OpenCV ###img_array = np.array(image) # if you want to pass it to OpenCV
#image_holder.image(new_img)
new_mask = None new_mask = None
if uploaded_masks:
mask = Image.open(uploaded_masks).convert('RGBA') if st.session_state["uploaded_masks"]:
mask = Image.fromarray(st.session_state["uploaded_masks"].image_data)
new_mask = mask.resize((width, height)) new_mask = mask.resize((width, height))
#masked_image_holder.image(new_mask)
try: try:
output_images, seed, info, stats = img2img(prompt=prompt, init_info=new_img, init_info_mask=new_mask, mask_mode=mask_mode, output_images, seed, info, stats = img2img(prompt=prompt, init_info=new_img, init_info_mask=new_mask, mask_mode=mask_mode,
mask_restore=img2img_mask_restore, ddim_steps=st.session_state["sampling_steps"], mask_restore=img2img_mask_restore, ddim_steps=st.session_state["sampling_steps"],
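
As a rough sketch of how the drawn canvas above ends up as the img2img mask, mirroring the `st_canvas(...)` call and the `Image.fromarray(...).resize(...)` conversion in this diff; the widget key and the standalone setup are assumptions:

```python
from PIL import Image
from streamlit_drawable_canvas import st_canvas

width, height = 512, 512
canvas_result = st_canvas(
    fill_color="rgba(255, 165, 0, 0.3)",
    stroke_width=50,
    stroke_color="#EEEEEE",
    background_color="#7B6E6E",
    update_streamlit=True,
    width=width,
    height=height,
    drawing_mode="freedraw",
    key="mask_canvas",  # hypothetical key
)

new_mask = None
if canvas_result is not None and canvas_result.image_data is not None:
    # image_data is an RGBA array; turn it into a PIL mask at generation size
    new_mask = Image.fromarray(canvas_result.image_data).resize((width, height))
```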

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or
@ -61,14 +61,14 @@ from ldm.models.blip import blip_decoder
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
blip_image_eval_size = 512 blip_image_eval_size = 512
#blip_model_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base_caption.pth'
server_state["clip_models"] = {}
server_state["preprocesses"] = {}
st.session_state["log"] = [] st.session_state["log"] = []
def load_blip_model(): def load_blip_model():
logger.info("Loading BLIP Model") logger.info("Loading BLIP Model")
if "log" not in st.session_state:
st.session_state["log"] = []
st.session_state["log"].append("Loading BLIP Model") st.session_state["log"].append("Loading BLIP Model")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='') st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
@ -79,7 +79,6 @@ def load_blip_model():
server_state["blip_model"] = server_state["blip_model"].eval() server_state["blip_model"] = server_state["blip_model"].eval()
#if not st.session_state["defaults"].general.optimized:
server_state["blip_model"] = server_state["blip_model"].to(device).half() server_state["blip_model"] = server_state["blip_model"].to(device).half()
logger.info("BLIP Model Loaded") logger.info("BLIP Model Loaded")
@ -90,57 +89,6 @@ def load_blip_model():
st.session_state["log"].append("BLIP Model already loaded") st.session_state["log"].append("BLIP Model already loaded")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='') st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
#return server_state["blip_model"]
#
def artstation_links():
"""Find and save every artstation link for the first 500 pages of the explore page."""
# collecting links to the list()
links = []
with open('data/img2txt/artstation_links.txt', 'w') as f:
for page_num in range(1,500):
response = requests.get(f'https://www.artstation.com/api/v2/community/explore/projects/trending.json?page={page_num}&dimension=all&per_page=100').text
# open json response
data = json.loads(response)
# loopinh through json response
for result in data['data']:
# still looping and grabbing url's
url = result['url']
links.append(url)
# writing each link on the new line (\n)
f.write(f'{url}\n')
return links
#
def artstation_users():
"""Get all the usernames and full name of the users on the first 500 pages of artstation explore page."""
# collect username and full name
artists = []
# opening a .txt file
with open('data/img2txt/artstation_artists.txt', 'w') as f:
for page_num in range(1,500):
response = requests.get(f'https://www.artstation.com/api/v2/community/explore/projects/trending.json?page={page_num}&dimension=all&per_page=100').text
# open json response
data = json.loads(response)
# loopinh through json response
for item in data['data']:
#print (item['user'])
username = item['user']['username']
full_name = item['user']['full_name']
# still looping and grabbing url's
artists.append(username)
artists.append(full_name)
# writing each link on the new line (\n)
f.write(f'{slugify(username)}\n')
f.write(f'{slugify(full_name)}\n')
return artists
def generate_caption(pil_image): def generate_caption(pil_image):
@ -155,7 +103,6 @@ def generate_caption(pil_image):
with torch.no_grad(): with torch.no_grad():
caption = server_state["blip_model"].generate(gpu_image, sample=False, num_beams=3, max_length=20, min_length=5) caption = server_state["blip_model"].generate(gpu_image, sample=False, num_beams=3, max_length=20, min_length=5)
#print (caption)
return caption[0] return caption[0]
def load_list(filename): def load_list(filename):
@ -194,8 +141,6 @@ def batch_rank(model, image_features, text_array, batch_size=st.session_state["d
return ranks return ranks
def interrogate(image, models): def interrogate(image, models):
#server_state["blip_model"] =
load_blip_model() load_blip_model()
logger.info("Generating Caption") logger.info("Generating Caption")
@ -216,14 +161,11 @@ def interrogate(image, models):
return return
table = [] table = []
bests = [[('', 0)]]*5 bests = [[('', 0)]]*7
logger.info("Ranking Text") logger.info("Ranking Text")
st.session_state["log"].append("Ranking Text")
#if "clip_model" in server_state: st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
#print (server_state["clip_model"])
#print (st.session_state["log_message"])
for model_name in models: for model_name in models:
with torch.no_grad(), torch.autocast('cuda', dtype=torch.float16): with torch.no_grad(), torch.autocast('cuda', dtype=torch.float16):
@ -242,15 +184,14 @@ def interrogate(image, models):
del server_state["preprocesses"][model] del server_state["preprocesses"][model]
clear_cuda() clear_cuda()
if model_name == 'ViT-H-14': if model_name == 'ViT-H-14':
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = open_clip.create_model_and_transforms(model_name, server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = \
pretrained='laion2b_s32b_b79k', open_clip.create_model_and_transforms(model_name, pretrained='laion2b_s32b_b79k', cache_dir='models/clip')
cache_dir='models/clip')
elif model_name == 'ViT-g-14': elif model_name == 'ViT-g-14':
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = open_clip.create_model_and_transforms(model_name, server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = \
pretrained='laion2b_s12b_b42k', open_clip.create_model_and_transforms(model_name, pretrained='laion2b_s12b_b42k', cache_dir='models/clip')
cache_dir='models/clip')
else: else:
server_state["clip_models"][model_name], server_state["preprocesses"][model_name] = clip.load(model_name, device=device, download_root='models/clip') server_state["clip_models"][model_name], server_state["preprocesses"][model_name] = \
clip.load(model_name, device=device, download_root='models/clip')
server_state["clip_models"][model_name] = server_state["clip_models"][model_name].cuda().eval() server_state["clip_models"][model_name] = server_state["clip_models"][model_name].cuda().eval()
images = server_state["preprocesses"][model_name](image).unsqueeze(0).cuda() images = server_state["preprocesses"][model_name](image).unsqueeze(0).cuda()
@ -269,9 +210,13 @@ def interrogate(image, models):
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["trending_list"])) ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["trending_list"]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["movements"])) ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["movements"]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["flavors"])) ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["flavors"]))
#ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["domains"]))
#ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["subreddits"]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["techniques"]))
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["tags"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["genres"])) # ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["genres"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["styles"])) # ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["styles"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["techniques"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["subjects"])) # ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["subjects"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["colors"])) # ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["colors"]))
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["moods"])) # ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["moods"]))
@ -288,59 +233,53 @@ def interrogate(image, models):
if confidence_sum > sum(bests[i][t][1] for t in range(len(bests[i]))): if confidence_sum > sum(bests[i][t][1] for t in range(len(bests[i]))):
bests[i] = ranks[i] bests[i] = ranks[i]
for best in bests:
best.sort(key=lambda x: x[1], reverse=True)
# prune to 3
best = best[:3]
row = [model_name] row = [model_name]
for r in ranks: for r in ranks:
row.append(', '.join([f"{x[0]} ({x[1]:0.1f}%)" for x in r])) row.append(', '.join([f"{x[0]} ({x[1]:0.1f}%)" for x in r]))
#for rank in ranks:
# rank.sort(key=lambda x: x[1], reverse=True)
# row.append(f'{rank[0][0]} {rank[0][1]:.2f}%')
table.append(row) table.append(row)
if st.session_state["defaults"].general.optimized: if st.session_state["defaults"].general.optimized:
del server_state["clip_models"][model_name] del server_state["clip_models"][model_name]
gc.collect() gc.collect()
# for i in range(len(st.session_state["uploaded_image"])):
st.session_state["prediction_table"][st.session_state["processed_image_count"]].dataframe(pd.DataFrame( st.session_state["prediction_table"][st.session_state["processed_image_count"]].dataframe(pd.DataFrame(
table, columns=["Model", "Medium", "Artist", "Trending", "Movement", "Flavors"])) table, columns=["Model", "Medium", "Artist", "Trending", "Movement", "Flavors", "Techniques", "Tags"]))
flaves = ', '.join([f"{x[0]}" for x in bests[4]])
medium = bests[0][0][0] medium = bests[0][0][0]
artist = bests[1][0][0]
trending = bests[2][0][0]
movement = bests[3][0][0]
flavors = bests[4][0][0]
#domains = bests[5][0][0]
#subreddits = bests[6][0][0]
techniques = bests[5][0][0]
tags = bests[6][0][0]
if caption.startswith(medium): if caption.startswith(medium):
st.session_state["text_result"][st.session_state["processed_image_count"]].code( st.session_state["text_result"][st.session_state["processed_image_count"]].code(
f"\n\n{caption} {bests[1][0][0]}, {bests[2][0][0]}, {bests[3][0][0]}, {flaves}", language="") f"\n\n{caption} {artist}, {trending}, {movement}, {techniques}, {flavors}, {tags}", language="")
else: else:
st.session_state["text_result"][st.session_state["processed_image_count"]].code( st.session_state["text_result"][st.session_state["processed_image_count"]].code(
f"\n\n{caption}, {medium} {bests[1][0][0]}, {bests[2][0][0]}, {bests[3][0][0]}, {flaves}", language="") f"\n\n{caption}, {medium} {artist}, {trending}, {movement}, {techniques}, {flavors}, {tags}", language="")
#
logger.info("Finished Interrogating.") logger.info("Finished Interrogating.")
st.session_state["log"].append("Finished Interrogating.") st.session_state["log"].append("Finished Interrogating.")
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='') st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
del st.session_state["log"]
#
def img2txt(): def img2txt():
data_path = "data/"
server_state["artists"] = load_list(os.path.join(data_path, 'img2txt', 'artists.txt'))
server_state["flavors"] = load_list(os.path.join(data_path, 'img2txt', 'flavors.txt'))
server_state["mediums"] = load_list(os.path.join(data_path, 'img2txt', 'mediums.txt'))
server_state["movements"] = load_list(os.path.join(data_path, 'img2txt', 'movements.txt'))
server_state["sites"] = load_list(os.path.join(data_path, 'img2txt', 'sites.txt'))
# server_state["genres"] = load_list(os.path.join(data_path, 'img2txt', 'genres.txt'))
# server_state["styles"] = load_list(os.path.join(data_path, 'img2txt', 'styles.txt'))
# server_state["techniques"] = load_list(os.path.join(data_path, 'img2txt', 'techniques.txt'))
# server_state["subjects"] = load_list(os.path.join(data_path, 'img2txt', 'subjects.txt'))
server_state["trending_list"] = [site for site in server_state["sites"]]
server_state["trending_list"].extend(["trending on "+site for site in server_state["sites"]])
server_state["trending_list"].extend(["featured on "+site for site in server_state["sites"]])
server_state["trending_list"].extend([site+" contest winner" for site in server_state["sites"]])
#image_path_or_url = "https://i.redd.it/e2e8gimigjq91.jpg"
models = [] models = []
if st.session_state["ViT-L/14"]: if st.session_state["ViT-L/14"]:
@ -390,7 +329,36 @@ def img2txt():
def layout(): def layout():
#set_page_title("Image-to-Text - Stable Diffusion WebUI") #set_page_title("Image-to-Text - Stable Diffusion WebUI")
#st.info("Under Construction. :construction_worker:") #st.info("Under Construction. :construction_worker:")
#
if "clip_models" not in server_state:
server_state["clip_models"] = {}
if "preprocesses" not in server_state:
server_state["preprocesses"] = {}
data_path = "data/"
if "artists" not in server_state:
server_state["artists"] = load_list(os.path.join(data_path, 'img2txt', 'artists.txt'))
if "flavors" not in server_state:
server_state["flavors"] = random.choices(load_list(os.path.join(data_path, 'img2txt', 'flavors.txt')), k=2000)
if "mediums" not in server_state:
server_state["mediums"] = load_list(os.path.join(data_path, 'img2txt', 'mediums.txt'))
if "movements" not in server_state:
server_state["movements"] = load_list(os.path.join(data_path, 'img2txt', 'movements.txt'))
if "sites" not in server_state:
server_state["sites"] = load_list(os.path.join(data_path, 'img2txt', 'sites.txt'))
#server_state["domains"] = load_list(os.path.join(data_path, 'img2txt', 'domains.txt'))
#server_state["subreddits"] = load_list(os.path.join(data_path, 'img2txt', 'subreddits.txt'))
if "techniques" not in server_state:
server_state["techniques"] = load_list(os.path.join(data_path, 'img2txt', 'techniques.txt'))
if "tags" not in server_state:
server_state["tags"] = load_list(os.path.join(data_path, 'img2txt', 'tags.txt'))
#server_state["genres"] = load_list(os.path.join(data_path, 'img2txt', 'genres.txt'))
# server_state["styles"] = load_list(os.path.join(data_path, 'img2txt', 'styles.txt'))
# server_state["subjects"] = load_list(os.path.join(data_path, 'img2txt', 'subjects.txt'))
if "trending_list" not in server_state:
server_state["trending_list"] = [site for site in server_state["sites"]]
server_state["trending_list"].extend(["trending on "+site for site in server_state["sites"]])
server_state["trending_list"].extend(["featured on "+site for site in server_state["sites"]])
server_state["trending_list"].extend([site+" contest winner" for site in server_state["sites"]])
with st.form("img2txt-inputs"): with st.form("img2txt-inputs"):
st.session_state["generation_mode"] = "img2txt" st.session_state["generation_mode"] = "img2txt"
@ -402,7 +370,7 @@ def layout():
#url = st.text_area("Input Text","") #url = st.text_area("Input Text","")
#url = st.text_input("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.") #url = st.text_input("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.")
#st.subheader("Input Image") #st.subheader("Input Image")
st.session_state["uploaded_image"] = st.file_uploader('Input Image', type=['png', 'jpg', 'jpeg'], accept_multiple_files=True) st.session_state["uploaded_image"] = st.file_uploader('Input Image', type=['png', 'jpg', 'jpeg', 'jfif'], accept_multiple_files=True)
with st.expander("CLIP models", expanded=True): with st.expander("CLIP models", expanded=True):
st.session_state["ViT-L/14"] = st.checkbox("ViT-L/14", value=True, help="ViT-L/14 model.") st.session_state["ViT-L/14"] = st.checkbox("ViT-L/14", value=True, help="ViT-L/14 model.")
@ -432,7 +400,9 @@ def layout():
with col2: with col2:
st.subheader("Image") st.subheader("Image")
refresh = st.form_submit_button("Refresh", help='Refresh the image preview to show your uploaded image instead of the default placeholder.') image_col1, image_col2 = st.columns([10,25])
with image_col1:
refresh = st.form_submit_button("Update Preview Image", help='Refresh the image preview to show your uploaded image instead of the default placeholder.')
if st.session_state["uploaded_image"]: if st.session_state["uploaded_image"]:
#print (type(st.session_state["uploaded_image"])) #print (type(st.session_state["uploaded_image"]))
@ -471,11 +441,12 @@ def layout():
#st.session_state["input_image_preview"].code('', language="") #st.session_state["input_image_preview"].code('', language="")
st.image("images/streamlit/img2txt_placeholder.png", clamp=True) st.image("images/streamlit/img2txt_placeholder.png", clamp=True)
with image_col2:
# #
# Every form must have a submit button, the extra blank spaces is a temp way to align it with the input field. Needs to be done in CSS or some other way. # Every form must have a submit button, the extra blank spaces is a temp way to align it with the input field. Needs to be done in CSS or some other way.
# generate_col1.title("") # generate_col1.title("")
# generate_col1.title("") # generate_col1.title("")
generate_button = st.form_submit_button("Generate!") generate_button = st.form_submit_button("Generate!", help="Start interrogating the images to generate a prompt from each of the selected images")
if generate_button: if generate_button:
# if model, pipe, RealESRGAN or GFPGAN is in st.session_state remove the model and pipe form session_state so that they are reloaded. # if model, pipe, RealESRGAN or GFPGAN is in st.session_state remove the model and pipe form session_state so that they are reloaded.
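
For reference, a small illustrative sketch of how the best-ranked categories, now including techniques and tags, are stitched onto the BLIP caption. The labels and confidences below are made-up placeholders; only the assembly logic mirrors the interrogate code above:

```python
caption = "a painting of a corgi wearing a top hat"
bests = [
    [("a painting of", 62.1)],         # medium
    [("by alphonse mucha", 41.3)],     # artist
    [("trending on artstation", 55.0)],# trending
    [("art nouveau", 38.9)],           # movement
    [("oil on canvas", 30.2)],         # flavors
    [("digital painting", 28.7)],      # techniques
    [("portrait", 25.4)],              # tags
]
medium, artist, trending, movement, flavors, techniques, tags = (b[0][0] for b in bests)

if caption.startswith(medium):
    prompt = f"{caption} {artist}, {trending}, {movement}, {techniques}, {flavors}, {tags}"
else:
    prompt = f"{caption}, {medium} {artist}, {trending}, {movement}, {techniques}, {flavors}, {tags}"
print(prompt)
```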

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or

View File

@ -0,0 +1,551 @@
import os
import re
import sys
import k_diffusion as K
import tqdm
from contextlib import contextmanager, nullcontext
import skimage
import numpy as np
import PIL
import torch
from einops import rearrange
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.kdiffusion import CFGMaskedDenoiser, KDiffusionSampler
from ldm.models.diffusion.plms import PLMSSampler
from nataili.util.cache import torch_gc
from nataili.util.check_prompt_length import check_prompt_length
from nataili.util.get_next_sequence_number import get_next_sequence_number
from nataili.util.image_grid import image_grid
from nataili.util.load_learned_embed_in_clip import load_learned_embed_in_clip
from nataili.util.save_sample import save_sample
from nataili.util.seed_to_int import seed_to_int
from slugify import slugify
import PIL
class img2img:
def __init__(self, model, device, output_dir, save_extension='jpg',
output_file_path=False, load_concepts=False, concepts_dir=None,
verify_input=True, auto_cast=True):
self.model = model
self.output_dir = output_dir
self.output_file_path = output_file_path
self.save_extension = save_extension
self.load_concepts = load_concepts
self.concepts_dir = concepts_dir
self.verify_input = verify_input
self.auto_cast = auto_cast
self.device = device
self.comments = []
self.output_images = []
self.info = ''
self.stats = ''
self.images = []
def create_random_tensors(self, shape, seeds):
xs = []
for seed in seeds:
torch.manual_seed(seed)
# randn results depend on device; gpu and cpu get different results for same seed;
# the way I see it, it's better to do this on CPU, so that everyone gets same result;
# but the original script had it like this so i do not dare change it for now because
# it will break everyone's seeds.
xs.append(torch.randn(shape, device=self.device))
x = torch.stack(xs)
return x
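
A quick usage sketch for `create_random_tensors()`: the method only reads `self.device`, so a tiny stand-in object is enough to exercise it here. The `[4, H//8, W//8]` latent shape is the usual Stable Diffusion convention and is an assumption, not something fixed by this file:

```python
import types
import torch

fake_self = types.SimpleNamespace(device="cpu")  # stand-in, only .device is used
batch = img2img.create_random_tensors(fake_self, [4, 512 // 8, 512 // 8], seeds=[42, 43])
print(batch.shape)  # torch.Size([2, 4, 64, 64]) -- one noise tensor per seed
```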
def process_prompt_tokens(self, prompt_tokens):
# compviz codebase
tokenizer = self.model.cond_stage_model.tokenizer
text_encoder = self.model.cond_stage_model.transformer
# diffusers codebase
#tokenizer = pipe.tokenizer
#text_encoder = pipe.text_encoder
ext = ('.pt', '.bin')
for token_name in prompt_tokens:
embedding_path = os.path.join(self.concepts_dir, token_name)
if os.path.exists(embedding_path):
for files in os.listdir(embedding_path):
if files.endswith(ext):
load_learned_embed_in_clip(f"{os.path.join(embedding_path, files)}", text_encoder, tokenizer, f"<{token_name}>")
else:
print(f"Concept {token_name} not found in {self.concepts_dir}")
del tokenizer, text_encoder
return
del tokenizer, text_encoder
def resize_image(self, resize_mode, im, width, height):
LANCZOS = (PIL.Image.Resampling.LANCZOS if hasattr(PIL.Image, 'Resampling') else PIL.Image.LANCZOS)
if resize_mode == "resize":
res = im.resize((width, height), resample=LANCZOS)
elif resize_mode == "crop":
ratio = width / height
src_ratio = im.width / im.height
src_w = width if ratio > src_ratio else im.width * height // im.height
src_h = height if ratio <= src_ratio else im.height * width // im.width
resized = im.resize((src_w, src_h), resample=LANCZOS)
res = PIL.Image.new("RGBA", (width, height))
res.paste(resized, box=(width // 2 - src_w // 2, height // 2 - src_h // 2))
else:
ratio = width / height
src_ratio = im.width / im.height
src_w = width if ratio < src_ratio else im.width * height // im.height
src_h = height if ratio >= src_ratio else im.height * width // im.width
resized = im.resize((src_w, src_h), resample=LANCZOS)
res = PIL.Image.new("RGBA", (width, height))
res.paste(resized, box=(width // 2 - src_w // 2, height // 2 - src_h // 2))
if ratio < src_ratio:
fill_height = height // 2 - src_h // 2
res.paste(resized.resize((width, fill_height), box=(0, 0, width, 0)), box=(0, 0))
res.paste(resized.resize((width, fill_height), box=(0, resized.height, width, resized.height)), box=(0, fill_height + src_h))
elif ratio > src_ratio:
fill_width = width // 2 - src_w // 2
res.paste(resized.resize((fill_width, height), box=(0, 0, 0, height)), box=(0, 0))
res.paste(resized.resize((fill_width, height), box=(resized.width, 0, resized.width, height)), box=(fill_width + src_w, 0))
return res
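
A usage sketch for `resize_image()`: "resize" stretches, "crop" scales to cover and center-crops, and any other mode falls through to the fill branch that pads by stretching the edge rows/columns. The method never touches `self`, so passing `None` works for a standalone demonstration:

```python
from PIL import Image

src = Image.new("RGB", (640, 480), "gray")
stretched = img2img.resize_image(None, "resize", src, 512, 512)  # plain stretch
covered   = img2img.resize_image(None, "crop",   src, 512, 512)  # scale to cover, center-crop
padded    = img2img.resize_image(None, "fill",   src, 512, 512)  # scale to fit, pad edges
print(covered.size)  # (512, 512)
```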
#
# helper fft routines that keep ortho normalization and auto-shift before and after fft
def _fft2(self, data):
if data.ndim > 2: # has channels
out_fft = np.zeros((data.shape[0], data.shape[1], data.shape[2]), dtype=np.complex128)
for c in range(data.shape[2]):
c_data = data[:,:,c]
out_fft[:,:,c] = np.fft.fft2(np.fft.fftshift(c_data),norm="ortho")
out_fft[:,:,c] = np.fft.ifftshift(out_fft[:,:,c])
else: # one channel
out_fft = np.zeros((data.shape[0], data.shape[1]), dtype=np.complex128)
out_fft[:,:] = np.fft.fft2(np.fft.fftshift(data),norm="ortho")
out_fft[:,:] = np.fft.ifftshift(out_fft[:,:])
return out_fft
def _ifft2(self, data):
if data.ndim > 2: # has channels
out_ifft = np.zeros((data.shape[0], data.shape[1], data.shape[2]), dtype=np.complex128)
for c in range(data.shape[2]):
c_data = data[:,:,c]
out_ifft[:,:,c] = np.fft.ifft2(np.fft.fftshift(c_data),norm="ortho")
out_ifft[:,:,c] = np.fft.ifftshift(out_ifft[:,:,c])
else: # one channel
out_ifft = np.zeros((data.shape[0], data.shape[1]), dtype=np.complex128)
out_ifft[:,:] = np.fft.ifft2(np.fft.fftshift(data),norm="ortho")
out_ifft[:,:] = np.fft.ifftshift(out_ifft[:,:])
return out_ifft
def _get_gaussian_window(self, width, height, std=3.14, mode=0):
window_scale_x = float(width / min(width, height))
window_scale_y = float(height / min(width, height))
window = np.zeros((width, height))
x = (np.arange(width) / width * 2. - 1.) * window_scale_x
for y in range(height):
fy = (y / height * 2. - 1.) * window_scale_y
if mode == 0:
window[:, y] = np.exp(-(x**2+fy**2) * std)
else:
window[:, y] = (1/((x**2+1.) * (fy**2+1.))) ** (std/3.14) # hey wait a minute that's not gaussian
return window
def _get_masked_window_rgb(self, np_mask_grey, hardness=1.):
np_mask_rgb = np.zeros((np_mask_grey.shape[0], np_mask_grey.shape[1], 3))
if hardness != 1.:
hardened = np_mask_grey[:] ** hardness
else:
hardened = np_mask_grey[:]
for c in range(3):
np_mask_rgb[:,:,c] = hardened[:]
return np_mask_rgb
def get_matched_noise(self, _np_src_image, np_mask_rgb, noise_q, color_variation):
"""
Explanation:
Getting good results in/out-painting with stable diffusion can be challenging.
Although there are simpler effective solutions for in-painting, out-painting can be especially challenging because there is no color data
in the masked area to help prompt the generator. Ideally, even for in-painting we'd like to work effectively without that data as well.
Provided here is my take on a potential solution to this problem.
By taking a fourier transform of the masked src img we get a function that tells us the presence and orientation of each feature scale in the unmasked src.
Shaping the init/seed noise for in/outpainting to the same distribution of feature scales, orientations, and positions increases output coherence
by helping keep features aligned. This technique is applicable to any continuous generation task such as audio or video, each of which can
be conceptualized as a series of out-painting steps where the last half of the input "frame" is erased. For multi-channel data such as color
or stereo sound the "color tone" or histogram of the seed noise can be matched to improve quality (using scikit-image currently)
This method is quite robust and has the added benefit of being fast independently of the size of the out-painted area.
The effects of this method include things like helping the generator integrate the pre-existing view distance and camera angle.
Carefully managing color and brightness with histogram matching is also essential to achieving good coherence.
noise_q controls the exponent in the fall-off of the distribution; it can be any positive number, and lower values mean higher detail (range > 0, default 1.)
color_variation controls how much freedom is allowed for the colors/palette of the out-painted area (range 0..1, default 0.01)
This code is provided as is under the Unlicense (https://unlicense.org/)
Although you have no obligation to do so, if you found this code helpful please find it in your heart to credit me [parlance-zz].
Questions or comments can be sent to parlance@fifth-harmonic.com (https://github.com/parlance-zz/)
This code is part of a new branch of a discord bot I am working on integrating with diffusers (https://github.com/parlance-zz/g-diffuser-bot)
"""
global DEBUG_MODE
global TMP_ROOT_PATH
width = _np_src_image.shape[0]
height = _np_src_image.shape[1]
num_channels = _np_src_image.shape[2]
np_src_image = _np_src_image[:] * (1. - np_mask_rgb)
np_mask_grey = (np.sum(np_mask_rgb, axis=2)/3.)
np_src_grey = (np.sum(np_src_image, axis=2)/3.)
all_mask = np.ones((width, height), dtype=bool)
img_mask = np_mask_grey > 1e-6
ref_mask = np_mask_grey < 1e-3
windowed_image = _np_src_image * (1.-self._get_masked_window_rgb(np_mask_grey))
windowed_image /= np.max(windowed_image)
windowed_image += np.average(_np_src_image) * np_mask_rgb # / (1.-np.average(np_mask_rgb)) # rather than leave the masked area black, we get better results from fft by filling it with the average unmasked color
#windowed_image += np.average(_np_src_image) * (np_mask_rgb * (1.- np_mask_rgb)) / (1.-np.average(np_mask_rgb)) # compensate for darkening across the mask transition area
#_save_debug_img(windowed_image, "windowed_src_img")
src_fft = self._fft2(windowed_image) # get feature statistics from masked src img
src_dist = np.absolute(src_fft)
src_phase = src_fft / src_dist
#_save_debug_img(src_dist, "windowed_src_dist")
noise_window = self._get_gaussian_window(width, height, mode=1) # start with simple gaussian noise
noise_rgb = np.random.random_sample((width, height, num_channels))
noise_grey = (np.sum(noise_rgb, axis=2)/3.)
noise_rgb *= color_variation # the colorfulness of the starting noise is blended to greyscale with a parameter
for c in range(num_channels):
noise_rgb[:,:,c] += (1. - color_variation) * noise_grey
noise_fft = self._fft2(noise_rgb)
for c in range(num_channels):
noise_fft[:,:,c] *= noise_window
noise_rgb = np.real(self._ifft2(noise_fft))
shaped_noise_fft = self._fft2(noise_rgb)
shaped_noise_fft[:,:,:] = np.absolute(shaped_noise_fft[:,:,:])**2 * (src_dist ** noise_q) * src_phase # perform the actual shaping
brightness_variation = 0.#color_variation # todo: temporarily tying brightness variation to color variation for now
contrast_adjusted_np_src = _np_src_image[:] * (brightness_variation + 1.) - brightness_variation * 2.
# scikit-image is used for histogram matching, very convenient!
shaped_noise = np.real(self._ifft2(shaped_noise_fft))
shaped_noise -= np.min(shaped_noise)
shaped_noise /= np.max(shaped_noise)
shaped_noise[img_mask,:] = skimage.exposure.match_histograms(shaped_noise[img_mask,:]**1., contrast_adjusted_np_src[ref_mask,:], channel_axis=1)
shaped_noise = _np_src_image[:] * (1. - np_mask_rgb) + shaped_noise * np_mask_rgb
#_save_debug_img(shaped_noise, "shaped_noise")
matched_noise = np.zeros((width, height, num_channels))
matched_noise = shaped_noise[:]
#matched_noise[all_mask,:] = skimage.exposure.match_histograms(shaped_noise[all_mask,:], _np_src_image[ref_mask,:], channel_axis=1)
#matched_noise = _np_src_image[:] * (1. - np_mask_rgb) + matched_noise * np_mask_rgb
#_save_debug_img(matched_noise, "matched_noise")
"""
todo:
color_variation doesn't have to be a single number; the overall color tone of the out-painted area could be parameter-controlled
"""
return np.clip(matched_noise, 0., 1.)
def find_noise_for_image(self, model, device, init_image, prompt, steps=200, cond_scale=2.0, verbose=False, normalize=False, generation_callback=None):
image = np.array(init_image).astype(np.float32) / 255.0
image = image[None].transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
image = 2. * image - 1.
image = image.to(device)
x = model.get_first_stage_encoding(model.encode_first_stage(image))
uncond = model.get_learned_conditioning([''])
cond = model.get_learned_conditioning([prompt])
s_in = x.new_ones([x.shape[0]])
dnw = K.external.CompVisDenoiser(model)
sigmas = dnw.get_sigmas(steps).flip(0)
if verbose:
print(sigmas)
for i in tqdm.trange(1, len(sigmas)):
x_in = torch.cat([x] * 2)
sigma_in = torch.cat([sigmas[i - 1] * s_in] * 2)
cond_in = torch.cat([uncond, cond])
c_out, c_in = [K.utils.append_dims(k, x_in.ndim) for k in dnw.get_scalings(sigma_in)]
if i == 1:
t = dnw.sigma_to_t(torch.cat([sigmas[i] * s_in] * 2))
else:
t = dnw.sigma_to_t(sigma_in)
eps = model.apply_model(x_in * c_in, t, cond=cond_in)
denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2)
denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cond_scale
if i == 1:
d = (x - denoised) / (2 * sigmas[i])
else:
d = (x - denoised) / sigmas[i - 1]
dt = sigmas[i] - sigmas[i - 1]
x = x + d * dt
return x / sigmas[-1]
def generate(self, prompt: str, init_img=None, init_mask=None, mask_mode='mask', resize_mode='resize', noise_mode='seed',
denoising_strength:float=0.8, ddim_steps=50, sampler_name='k_lms', n_iter=1, batch_size=1, cfg_scale=7.5, seed=None,
height=512, width=512, save_individual_images: bool = True, save_grid: bool = True, ddim_eta:float = 0.0):
seed = seed_to_int(seed)
image_dict = {
"seed": seed
}
# Init image is assumed to be a PIL image
init_img = self.resize_image('resize', init_img, width, height)
if sampler_name == 'PLMS':
sampler = PLMSSampler(self.model)
elif sampler_name == 'DDIM':
sampler = DDIMSampler(self.model)
elif sampler_name == 'k_dpm_2_a':
sampler = KDiffusionSampler(self.model,'dpm_2_ancestral')
elif sampler_name == 'k_dpm_2':
sampler = KDiffusionSampler(self.model,'dpm_2')
elif sampler_name == 'k_euler_a':
sampler = KDiffusionSampler(self.model,'euler_ancestral')
elif sampler_name == 'k_euler':
sampler = KDiffusionSampler(self.model,'euler')
elif sampler_name == 'k_heun':
sampler = KDiffusionSampler(self.model,'heun')
elif sampler_name == 'k_lms':
sampler = KDiffusionSampler(self.model,'lms')
else:
raise Exception("Unknown sampler: " + sampler_name)
torch_gc()
def process_init_mask(init_mask: PIL.Image):
if init_mask.mode == "RGBA":
init_mask = init_mask.convert('RGBA')
background = PIL.Image.new('RGBA', init_mask.size, (0, 0, 0))
init_mask = PIL.Image.alpha_composite(background, init_mask)
init_mask = init_mask.convert('RGB')
return init_mask
if mask_mode == "mask":
if init_mask:
init_mask = process_init_mask(init_mask)
elif mask_mode == "invert":
if init_mask:
init_mask = process_init_mask(init_mask)
init_mask = PIL.ImageOps.invert(init_mask)
elif mask_mode == "alpha":
init_img_transparency = init_img.split()[-1].convert('L')#.point(lambda x: 255 if x > 0 else 0, mode='1')
init_mask = init_img_transparency
init_mask = init_mask.convert("RGB")
init_mask = self.resize_image(resize_mode, init_mask, width, height)
init_mask = init_mask.convert("RGB")
assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'
t_enc = int(denoising_strength * ddim_steps)
if init_mask is not None and (noise_mode == "matched" or noise_mode == "find_and_matched") and init_img is not None:
noise_q = 0.99
color_variation = 0.0
mask_blend_factor = 1.0
np_init = (np.asarray(init_img.convert("RGB"))/255.0).astype(np.float64) # annoyingly complex mask fixing
np_mask_rgb = 1. - (np.asarray(PIL.ImageOps.invert(init_mask).convert("RGB"))/255.0).astype(np.float64)
np_mask_rgb -= np.min(np_mask_rgb)
np_mask_rgb /= np.max(np_mask_rgb)
np_mask_rgb = 1. - np_mask_rgb
np_mask_rgb_hardened = 1. - (np_mask_rgb < 0.99).astype(np.float64)
blurred = skimage.filters.gaussian(np_mask_rgb_hardened[:], sigma=16., channel_axis=2, truncate=32.)
blurred2 = skimage.filters.gaussian(np_mask_rgb_hardened[:], sigma=16., channel_axis=2, truncate=32.)
#np_mask_rgb_dilated = np_mask_rgb + blurred # fixup mask todo: derive magic constants
#np_mask_rgb = np_mask_rgb + blurred
np_mask_rgb_dilated = np.clip((np_mask_rgb + blurred2) * 0.7071, 0., 1.)
np_mask_rgb = np.clip((np_mask_rgb + blurred) * 0.7071, 0., 1.)
noise_rgb = self.get_matched_noise(np_init, np_mask_rgb, noise_q, color_variation)
blend_mask_rgb = np.clip(np_mask_rgb_dilated,0.,1.) ** (mask_blend_factor)
noised = noise_rgb[:]
blend_mask_rgb **= (2.)
noised = np_init[:] * (1. - blend_mask_rgb) + noised * blend_mask_rgb
np_mask_grey = np.sum(np_mask_rgb, axis=2)/3.
ref_mask = np_mask_grey < 1e-3
all_mask = np.ones((height, width), dtype=bool)
noised[all_mask,:] = skimage.exposure.match_histograms(noised[all_mask,:]**1., noised[ref_mask,:], channel_axis=1)
init_img = PIL.Image.fromarray(np.clip(noised * 255., 0., 255.).astype(np.uint8), mode="RGB")
def init():
image = init_img.convert('RGB')
image = np.array(image).astype(np.float32) / 255.0
image = image[None].transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
mask_channel = None
if init_mask:
alpha = self.resize_image(resize_mode, init_mask, width // 8, height // 8)
mask_channel = alpha.split()[-1]
mask = None
if mask_channel is not None:
mask = np.array(mask_channel).astype(np.float32) / 255.0
mask = (1 - mask)
mask = np.tile(mask, (4, 1, 1))
mask = mask[None].transpose(0, 1, 2, 3)
mask = torch.from_numpy(mask).to(self.model.device)
init_image = 2. * image - 1.
init_image = init_image.to(self.model.device)
init_latent = self.model.get_first_stage_encoding(self.model.encode_first_stage(init_image)) # move to latent space
return init_latent, mask,
def sample(init_data, x, conditioning, unconditional_conditioning, sampler_name):
t_enc_steps = t_enc
obliterate = False
if ddim_steps == t_enc_steps:
t_enc_steps = t_enc_steps - 1
obliterate = True
if sampler_name != 'DDIM':
x0, z_mask = init_data
sigmas = sampler.model_wrap.get_sigmas(ddim_steps)
noise = x * sigmas[ddim_steps - t_enc_steps - 1]
xi = x0 + noise
# Obliterate masked image
if z_mask is not None and obliterate:
random = torch.randn(z_mask.shape, device=xi.device)
xi = (z_mask * noise) + ((1-z_mask) * xi)
sigma_sched = sigmas[ddim_steps - t_enc_steps - 1:]
model_wrap_cfg = CFGMaskedDenoiser(sampler.model_wrap)
samples_ddim = K.sampling.__dict__[f'sample_{sampler.get_sampler_name()}'](model_wrap_cfg, xi, sigma_sched,
extra_args={'cond': conditioning, 'uncond': unconditional_conditioning,
'cond_scale': cfg_scale, 'mask': z_mask, 'x0': x0, 'xi': xi}, disable=False)
else:
x0, z_mask = init_data
sampler.make_schedule(ddim_num_steps=ddim_steps, ddim_eta=0.0, verbose=False)
z_enc = sampler.stochastic_encode(x0, torch.tensor([t_enc_steps]*batch_size).to(self.model.device))
# Obliterate masked image
if z_mask is not None and obliterate:
random = torch.randn(z_mask.shape, device=z_enc.device)
z_enc = (z_mask * random) + ((1-z_mask) * z_enc)
# decode it
samples_ddim = sampler.decode(z_enc, conditioning, t_enc_steps,
unconditional_guidance_scale=cfg_scale,
unconditional_conditioning=unconditional_conditioning,
z_mask=z_mask, x0=x0)
return samples_ddim
torch_gc()
if self.load_concepts and self.concepts_dir is not None:
prompt_tokens = re.findall('<([a-zA-Z0-9-]+)>', prompt)
if prompt_tokens:
self.process_prompt_tokens(prompt_tokens)
os.makedirs(self.output_dir, exist_ok=True)
sample_path = os.path.join(self.output_dir, "samples")
os.makedirs(sample_path, exist_ok=True)
if self.verify_input:
try:
check_prompt_length(self.model, prompt, self.comments)
except:
import traceback
print("Error verifying input:", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
all_prompts = batch_size * n_iter * [prompt]
all_seeds = [seed + x for x in range(len(all_prompts))]
precision_scope = torch.autocast if self.auto_cast else nullcontext
with torch.no_grad(), precision_scope("cuda"):
for n in range(n_iter):
print(f"Iteration: {n+1}/{n_iter}")
prompts = all_prompts[n * batch_size:(n + 1) * batch_size]
seeds = all_seeds[n * batch_size:(n + 1) * batch_size]
uc = self.model.get_learned_conditioning(len(prompts) * [''])
if isinstance(prompts, tuple):
prompts = list(prompts)
c = self.model.get_learned_conditioning(prompts)
opt_C = 4
opt_f = 8
shape = [opt_C, height // opt_f, width // opt_f]
x = self.create_random_tensors(shape, seeds=seeds)
init_data = init()
samples_ddim = sample(init_data=init_data, x=x, conditioning=c, unconditional_conditioning=uc, sampler_name=sampler_name)
x_samples_ddim = self.model.decode_first_stage(samples_ddim)
x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
for i, x_sample in enumerate(x_samples_ddim):
sanitized_prompt = slugify(prompts[i])
full_path = os.path.join(os.getcwd(), sample_path)
sample_path_i = sample_path
base_count = get_next_sequence_number(sample_path_i)
filename = f"{base_count:05}-{ddim_steps}_{sampler_name}_{seeds[i]}_{sanitized_prompt}"[:200-len(full_path)]
x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
x_sample = x_sample.astype(np.uint8)
image = PIL.Image.fromarray(x_sample)
image_dict['image'] = image
self.images.append(image_dict)
if save_individual_images:
path = os.path.join(sample_path, filename + '.' + self.save_extension)
success = save_sample(image, filename, sample_path_i, self.save_extension)
if success:
if self.output_file_path:
self.output_images.append(path)
else:
self.output_images.append(image)
else:
return
self.info = f"""
{prompt}
Steps: {ddim_steps}, Sampler: {sampler_name}, CFG scale: {cfg_scale}, Seed: {seed}
""".strip()
self.stats = f'''
'''
for comment in self.comments:
self.info += "\n\n" + comment
torch_gc()
del sampler
return
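The docstring in get_matched_noise above explains the idea in prose; here is a minimal, single-channel sketch of just the spectrum-shaping step, for illustration only (nothing beyond NumPy is assumed, and shape_noise_to_reference is a made-up name, not part of this commit):

import numpy as np

def shape_noise_to_reference(reference, noise_q=1.0):
    # the magnitude spectrum of the reference acts as its "feature scale" distribution
    ref_mag = np.absolute(np.fft.fft2(reference, norm="ortho"))
    # start from plain uniform noise and impose that spectrum on it
    noise_fft = np.fft.fft2(np.random.random_sample(reference.shape), norm="ortho")
    shaped = np.real(np.fft.ifft2(noise_fft * ref_mag ** noise_q, norm="ortho"))
    # normalise to 0..1, as get_matched_noise does before histogram matching
    shaped -= shaped.min()
    return shaped / shaped.max()

# e.g. shaped = shape_noise_to_reference(np.random.rand(64, 64), noise_q=0.99)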

View File

@ -0,0 +1,201 @@
import os
import re
import sys
from contextlib import contextmanager, nullcontext
import numpy as np
import PIL
import torch
from einops import rearrange
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.kdiffusion import KDiffusionSampler
from ldm.models.diffusion.plms import PLMSSampler
from nataili.util.cache import torch_gc
from nataili.util.check_prompt_length import check_prompt_length
from nataili.util.get_next_sequence_number import get_next_sequence_number
from nataili.util.image_grid import image_grid
from nataili.util.load_learned_embed_in_clip import load_learned_embed_in_clip
from nataili.util.save_sample import save_sample
from nataili.util.seed_to_int import seed_to_int
from slugify import slugify
class txt2img:
def __init__(self, model, device, output_dir, save_extension='jpg',
output_file_path=False, load_concepts=False, concepts_dir=None,
verify_input=True, auto_cast=True):
self.model = model
self.output_dir = output_dir
self.output_file_path = output_file_path
self.save_extension = save_extension
self.load_concepts = load_concepts
self.concepts_dir = concepts_dir
self.verify_input = verify_input
self.auto_cast = auto_cast
self.device = device
self.comments = []
self.output_images = []
self.info = ''
self.stats = ''
self.images = []
def create_random_tensors(self, shape, seeds):
xs = []
for seed in seeds:
torch.manual_seed(seed)
# randn results depend on the device; GPU and CPU give different results for the same seed.
# It would be better to do this on the CPU so that everyone gets the same result,
# but the original script had it like this, so I do not dare change it for now because
# it would break everyone's seeds.
xs.append(torch.randn(shape, device=self.device))
x = torch.stack(xs)
return x
def process_prompt_tokens(self, prompt_tokens):
# compviz codebase
tokenizer = self.model.cond_stage_model.tokenizer
text_encoder = self.model.cond_stage_model.transformer
# diffusers codebase
#tokenizer = pipe.tokenizer
#text_encoder = pipe.text_encoder
ext = ('.pt', '.bin')
for token_name in prompt_tokens:
embedding_path = os.path.join(self.concepts_dir, token_name)
if os.path.exists(embedding_path):
for files in os.listdir(embedding_path):
if files.endswith(ext):
load_learned_embed_in_clip(f"{os.path.join(embedding_path, files)}", text_encoder, tokenizer, f"<{token_name}>")
else:
print(f"Concept {token_name} not found in {self.concepts_dir}")
del tokenizer, text_encoder
return
del tokenizer, text_encoder
def generate(self, prompt: str, ddim_steps=50, sampler_name='k_lms', n_iter=1, batch_size=1, cfg_scale=7.5, seed=None,
height=512, width=512, save_individual_images: bool = True, save_grid: bool = True, ddim_eta:float = 0.0):
seed = seed_to_int(seed)
image_dict = {
"seed": seed
}
negprompt = ''
if '###' in prompt:
prompt, negprompt = prompt.split('###', 1)
prompt = prompt.strip()
negprompt = negprompt.strip()
if sampler_name == 'PLMS':
sampler = PLMSSampler(self.model)
elif sampler_name == 'DDIM':
sampler = DDIMSampler(self.model)
elif sampler_name == 'k_dpm_2_a':
sampler = KDiffusionSampler(self.model,'dpm_2_ancestral')
elif sampler_name == 'k_dpm_2':
sampler = KDiffusionSampler(self.model,'dpm_2')
elif sampler_name == 'k_euler_a':
sampler = KDiffusionSampler(self.model,'euler_ancestral')
elif sampler_name == 'k_euler':
sampler = KDiffusionSampler(self.model,'euler')
elif sampler_name == 'k_heun':
sampler = KDiffusionSampler(self.model,'heun')
elif sampler_name == 'k_lms':
sampler = KDiffusionSampler(self.model,'lms')
else:
raise Exception("Unknown sampler: " + sampler_name)
def sample(init_data, x, conditioning, unconditional_conditioning, sampler_name):
samples_ddim, _ = sampler.sample(S=ddim_steps, conditioning=conditioning, unconditional_guidance_scale=cfg_scale,
unconditional_conditioning=unconditional_conditioning, x_T=x)
return samples_ddim
torch_gc()
if self.load_concepts and self.concepts_dir is not None:
prompt_tokens = re.findall('<([a-zA-Z0-9-]+)>', prompt)
if prompt_tokens:
self.process_prompt_tokens(prompt_tokens)
os.makedirs(self.output_dir, exist_ok=True)
sample_path = os.path.join(self.output_dir, "samples")
os.makedirs(sample_path, exist_ok=True)
if self.verify_input:
try:
check_prompt_length(self.model, prompt, self.comments)
except:
import traceback
print("Error verifying input:", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
all_prompts = batch_size * n_iter * [prompt]
all_seeds = [seed + x for x in range(len(all_prompts))]
precision_scope = torch.autocast if self.auto_cast else nullcontext
with torch.no_grad(), precision_scope("cuda"):
for n in range(n_iter):
print(f"Iteration: {n+1}/{n_iter}")
prompts = all_prompts[n * batch_size:(n + 1) * batch_size]
seeds = all_seeds[n * batch_size:(n + 1) * batch_size]
uc = self.model.get_learned_conditioning(len(prompts) * [negprompt])
if isinstance(prompts, tuple):
prompts = list(prompts)
c = self.model.get_learned_conditioning(prompts)
opt_C = 4
opt_f = 8
shape = [opt_C, height // opt_f, width // opt_f]
x = self.create_random_tensors(shape, seeds=seeds)
samples_ddim = sample(init_data=None, x=x, conditioning=c, unconditional_conditioning=uc, sampler_name=sampler_name)
x_samples_ddim = self.model.decode_first_stage(samples_ddim)
x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
for i, x_sample in enumerate(x_samples_ddim):
sanitized_prompt = slugify(prompts[i])
full_path = os.path.join(os.getcwd(), sample_path)
sample_path_i = sample_path
base_count = get_next_sequence_number(sample_path_i)
filename = f"{base_count:05}-{ddim_steps}_{sampler_name}_{seeds[i]}_{sanitized_prompt}"[:200-len(full_path)]
x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
x_sample = x_sample.astype(np.uint8)
image = PIL.Image.fromarray(x_sample)
image_dict['image'] = image
self.images.append(image_dict)
if save_individual_images:
path = os.path.join(sample_path, filename + '.' + self.save_extension)
success = save_sample(image, filename, sample_path_i, self.save_extension)
if success:
if self.output_file_path:
self.output_images.append(path)
else:
self.output_images.append(image)
else:
return
self.info = f"""
{prompt}
Steps: {ddim_steps}, Sampler: {sampler_name}, CFG scale: {cfg_scale}, Seed: {seed}
""".strip()
self.stats = f'''
'''
for comment in self.comments:
self.info += "\n\n" + comment
torch_gc()
del sampler
return
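One detail of the prompt handling above worth calling out: generate() treats everything after a '###' separator as the negative prompt and passes it to get_learned_conditioning in place of the empty string. A minimal sketch of just that split (split_negative_prompt is an illustrative name, not part of the commit):

def split_negative_prompt(prompt: str):
    # mirrors the '###' convention used in txt2img.generate above
    negprompt = ''
    if '###' in prompt:
        prompt, negprompt = prompt.split('###', 1)
    return prompt.strip(), negprompt.strip()

# split_negative_prompt("a castle on a hill ### blurry, watermark")
# -> ("a castle on a hill", "blurry, watermark")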

View File

@ -0,0 +1,458 @@
import os
import json
import shutil
import zipfile
import requests
import git
import torch
import hashlib
from ldm.util import instantiate_from_config
from omegaconf import OmegaConf
from transformers import logging
from basicsr.archs.rrdbnet_arch import RRDBNet
from gfpgan import GFPGANer
from realesrgan import RealESRGANer
from ldm.models.blip import blip_decoder
from tqdm import tqdm
import open_clip
import clip
from nataili.util.cache import torch_gc
from nataili.util import logger
logging.set_verbosity_error()
models = json.load(open('./db.json'))
dependencies = json.load(open('./db_dep.json'))
remote_models = "https://raw.githubusercontent.com/Sygil-Dev/nataili-model-reference/main/db.json"
remote_dependencies = "https://raw.githubusercontent.com/Sygil-Dev/nataili-model-reference/main/db_dep.json"
class ModelManager():
def __init__(self, hf_auth=None, download=True):
if download:
try:
logger.init("Model Reference", status="Downloading")
r = requests.get(remote_models)
self.models = r.json()
r = requests.get(remote_dependencies)
self.dependencies = json.load(open('./db_dep.json'))
logger.init_ok("Model Reference", status="OK")
except:
logger.init_err("Model Reference", status="Download Error")
self.models = json.load(open('./db.json'))
self.dependencies = json.load(open('./db_dep.json'))
logger.init_warn("Model Reference", status="Local")
self.available_models = []
self.tainted_models = []
self.available_dependencies = []
self.loaded_models = {}
self.hf_auth = None
self.set_authentication(hf_auth)
def init(self):
dependencies_available = []
for dependency in self.dependencies:
if self.check_available(self.get_dependency_files(dependency)):
dependencies_available.append(dependency)
self.available_dependencies = dependencies_available
models_available = []
for model in self.models:
if self.check_available(self.get_model_files(model)):
models_available.append(model)
self.available_models = models_available
if self.hf_auth is not None:
if 'username' not in self.hf_auth or 'password' not in self.hf_auth:
raise ValueError('hf_auth must contain username and password')
else:
if self.hf_auth['username'] == '' or self.hf_auth['password'] == '':
raise ValueError('hf_auth must contain username and password')
return True
def set_authentication(self, hf_auth=None):
# Do not let an empty hf_auth override previously set credentials
if not hf_auth and self.hf_auth:
return
self.hf_auth = hf_auth
def get_model(self, model_name):
return self.models.get(model_name)
def get_filtered_models(self, **kwargs):
'''Get all models.
Can filter based on metadata of the model reference db.
'''
filtered_models = self.models
for keyword in kwargs:
iterating_models = filtered_models.copy()
filtered_models = {}
for model in iterating_models:
# logger.debug([keyword,iterating_models[model].get(keyword),kwargs[keyword]])
if iterating_models[model].get(keyword) == kwargs[keyword]:
filtered_models[model] = iterating_models[model]
return filtered_models
def get_filtered_model_names(self, **kwargs):
filtered_models = self.get_filtered_models(**kwargs)
return list(filtered_models.keys())
def get_dependency(self, dependency_name):
return self.dependencies[dependency_name]
def get_model_files(self, model_name):
return self.models[model_name]['config']['files']
def get_dependency_files(self, dependency_name):
return self.dependencies[dependency_name]['config']['files']
def get_model_download(self, model_name):
return self.models[model_name]['config']['download']
def get_dependency_download(self, dependency_name):
return self.dependencies[dependency_name]['config']['download']
def get_available_models(self):
return self.available_models
def get_available_dependencies(self):
return self.available_dependencies
def get_loaded_models(self):
return self.loaded_models
def get_loaded_models_names(self):
return list(self.loaded_models.keys())
def get_loaded_model(self, model_name):
return self.loaded_models[model_name]
def unload_model(self, model_name):
if model_name in self.loaded_models:
del self.loaded_models[model_name]
return True
return False
def unload_all_models(self):
for model in self.loaded_models:
del self.loaded_models[model]
return True
def taint_model(self,model_name):
'''Marks a model as not valid by removing it from available_models'''
if model_name in self.available_models:
self.available_models.remove(model_name)
self.tainted_models.append(model_name)
def taint_models(self, models):
for model in models:
self.taint_model(model)
def load_model_from_config(self, model_path='', config_path='', map_location="cpu"):
config = OmegaConf.load(config_path)
pl_sd = torch.load(model_path, map_location=map_location)
if "global_step" in pl_sd:
logger.info(f"Global Step: {pl_sd['global_step']}")
sd = pl_sd["state_dict"]
model = instantiate_from_config(config.model)
m, u = model.load_state_dict(sd, strict=False)
model = model.eval()
del pl_sd, sd, m, u
return model
def load_ckpt(self, model_name='', precision='half', gpu_id=0):
ckpt_path = self.get_model_files(model_name)[0]['path']
config_path = self.get_model_files(model_name)[1]['path']
model = self.load_model_from_config(model_path=ckpt_path, config_path=config_path)
device = torch.device(f"cuda:{gpu_id}")
model = (model if precision=='full' else model.half()).to(device)
torch_gc()
return {'model': model, 'device': device}
def load_realesrgan(self, model_name='', precision='half', gpu_id=0):
RealESRGAN_models = {
'RealESRGAN_x4plus': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4),
'RealESRGAN_x4plus_anime_6B': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
}
model_path = self.get_model_files(model_name)[0]['path']
device = torch.device(f"cuda:{gpu_id}")
model = RealESRGANer(scale=2, model_path=model_path, model=RealESRGAN_models[models[model_name]['name']],
pre_pad=0, half=True if precision == 'half' else False, device=device)
return {'model': model, 'device': device}
def load_gfpgan(self, model_name='', gpu_id=0):
model_path = self.get_model_files(model_name)[0]['path']
device = torch.device(f"cuda:{gpu_id}")
model = GFPGANer(model_path=model_path, upscale=1, arch='clean',
channel_multiplier=2, bg_upsampler=None, device=device)
return {'model': model, 'device': device}
def load_blip(self, model_name='', precision='half', gpu_id=0, blip_image_eval_size=512, vit='base'):
# vit = 'base' or 'large'
model_path = self.get_model_files(model_name)[0]['path']
device = torch.device(f"cuda:{gpu_id}")
model = blip_decoder(pretrained=model_path,
med_config="configs/blip/med_config.json",
image_size=blip_image_eval_size, vit=vit)
model = model.eval()
model = (model if precision=='full' else model.half()).to(device)
return {'model': model, 'device': device}
def load_open_clip(self, model_name='', precision='half', gpu_id=0):
pretrained = self.get_model(model_name)['pretrained_name']
device = torch.device(f"cuda:{gpu_id}")
model, _, preprocesses = open_clip.create_model_and_transforms(model_name, pretrained=pretrained, cache_dir='models/clip')
model = model.eval()
model = (model if precision=='full' else model.half()).to(device)
return {'model': model, 'device': device, 'preprocesses': preprocesses}
def load_clip(self, model_name='', precision='half', gpu_id=0):
device = torch.device(f"cuda:{gpu_id}")
model, preprocesses = clip.load(model_name, device=device, download_root='models/clip')
model = model.eval()
model = (model if precision=='full' else model.half()).to(device)
return {'model': model, 'device': device, 'preprocesses': preprocesses}
def load_model(self, model_name='', precision='half', gpu_id=0):
if model_name not in self.available_models:
return False
if self.models[model_name]['type'] == 'ckpt':
self.loaded_models[model_name] = self.load_ckpt(model_name, precision, gpu_id)
return True
elif self.models[model_name]['type'] == 'realesrgan':
self.loaded_models[model_name] = self.load_realesrgan(model_name, precision, gpu_id)
return True
elif self.models[model_name]['type'] == 'gfpgan':
self.loaded_models[model_name] = self.load_gfpgan(model_name, gpu_id)
return True
elif self.models[model_name]['type'] == 'blip':
self.loaded_models[model_name] = self.load_blip(model_name, precision, gpu_id, 512, 'base')
return True
elif self.models[model_name]['type'] == 'open_clip':
self.loaded_models[model_name] = self.load_open_clip(model_name, precision, gpu_id)
return True
elif self.models[model_name]['type'] == 'clip':
self.loaded_models[model_name] = self.load_clip(model_name, precision, gpu_id)
return True
else:
return False
def validate_model(self, model_name):
files = self.get_model_files(model_name)
all_ok = True
for file_details in files:
if not self.check_file_available(file_details['path']):
return False
if not self.validate_file(file_details):
return False
return True
def validate_file(self, file_details):
if 'md5sum' in file_details:
file_name = file_details['path']
logger.debug(f"Getting md5sum of {file_name}")
with open(file_name, 'rb') as file_to_check:
file_hash = hashlib.md5()
while chunk := file_to_check.read(8192):
file_hash.update(chunk)
if file_details['md5sum'] != file_hash.hexdigest():
return False
return True
def check_file_available(self, file_path):
return os.path.exists(file_path)
def check_available(self, files):
available = True
for file in files:
if not self.check_file_available(file['path']):
available = False
return available
def download_file(self, url, file_path):
# make directory
os.makedirs(os.path.dirname(file_path), exist_ok=True)
pbar_desc = file_path.split('/')[-1]
r = requests.get(url, stream=True)
with open(file_path, 'wb') as f:
with tqdm(
# all optional kwargs
unit='B', unit_scale=True, unit_divisor=1024, miniters=1,
desc=pbar_desc, total=int(r.headers.get('content-length', 0))
) as pbar:
for chunk in r.iter_content(chunk_size=16*1024):
if chunk:
f.write(chunk)
pbar.update(len(chunk))
def download_model(self, model_name):
if model_name in self.available_models:
logger.info(f"{model_name} is already available.")
return True
download = self.get_model_download(model_name)
files = self.get_model_files(model_name)
for i in range(len(download)):
file_path = f"{download[i]['file_path']}/{download[i]['file_name']}" if 'file_path' in download[i] else files[i]['path']
if 'file_url' in download[i]:
download_url = download[i]['file_url']
if 'hf_auth' in download[i]:
username = self.hf_auth['username']
password = self.hf_auth['password']
download_url = download_url.format(username=username, password=password)
if 'file_name' in download[i]:
download_name = download[i]['file_name']
if 'file_path' in download[i]:
download_path = download[i]['file_path']
if 'manual' in download[i]:
logger.warning(f"The model {model_name} requires manual download from {download_url}. Please place it in {download_path}/{download_name} then press ENTER to continue...")
input('')
continue
# TODO: simplify
if "file_content" in download[i]:
file_content = download[i]['file_content']
logger.info(f"writing {file_content} to {file_path}")
# make directory download_path
os.makedirs(download_path, exist_ok=True)
# write file_content to download_path/download_name
with open(os.path.join(download_path, download_name), 'w') as f:
f.write(file_content)
elif 'symlink' in download[i]:
logger.info(f"symlink {file_path} to {download[i]['symlink']}")
symlink = download[i]['symlink']
# make directory symlink
os.makedirs(download_path, exist_ok=True)
# make symlink from download_path/download_name to symlink
os.symlink(symlink, os.path.join(download_path, download_name))
elif 'git' in download[i]:
logger.info(f"git clone {download_url} to {file_path}")
# make directory download_path
os.makedirs(file_path, exist_ok=True)
git.Git(file_path).clone(download_url)
if 'post_process' in download[i]:
for post_process in download[i]['post_process']:
if 'delete' in post_process:
# delete folder post_process['delete']
logger.info(f"delete {post_process['delete']}")
try:
shutil.rmtree(post_process['delete'])
except PermissionError as e:
logger.error(f"[!] Something went wrong while deleting the `{post_process['delete']}`. Please delete it manually.")
logger.error("PermissionError: ", e)
else:
if not self.check_file_available(file_path) or model_name in self.tainted_models:
logger.debug(f'Downloading {download_url} to {file_path}')
self.download_file(download_url, file_path)
if not self.validate_model(model_name):
return False
if model_name in self.tainted_models:
self.tainted_models.remove(model_name)
self.init()
return True
def download_dependency(self, dependency_name):
if dependency_name in self.available_dependencies:
logger.info(f"{dependency_name} is already installed.")
return True
download = self.get_dependency_download(dependency_name)
files = self.get_dependency_files(dependency_name)
for i in range(len(download)):
if "git" in download[i]:
logger.warning("git download not implemented yet")
break
file_path = files[i]['path']
if 'file_url' in download[i]:
download_url = download[i]['file_url']
if 'file_name' in download[i]:
download_name = download[i]['file_name']
if 'file_path' in download[i]:
download_path = download[i]['file_path']
logger.debug(download_name)
if "unzip" in download[i]:
zip_path = f'temp/{download_name}.zip'
# os dirname zip_path
# mkdir temp
os.makedirs("temp", exist_ok=True)
self.download_file(download_url, zip_path)
logger.info(f"unzip {zip_path}")
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
zip_ref.extractall('temp/')
# move temp/sd-concepts-library-main/sd-concepts-library to download_path
logger.info(f"move temp/{download_name}-main/{download_name} to {download_path}")
shutil.move(f"temp/{download_name}-main/{download_name}", download_path)
logger.info(f"delete {zip_path}")
os.remove(zip_path)
logger.info(f"delete temp/{download_name}-main/")
shutil.rmtree(f"temp/{download_name}-main")
else:
if not self.check_file_available(file_path):
logger.init(f'{file_path}', status="Downloading")
self.download_file(download_url, file_path)
self.init()
return True
def download_all_models(self):
for model in self.get_filtered_model_names(download_all = True):
if not self.check_model_available(model):
logger.init(f"{model}", status="Downloading")
self.download_model(model)
else:
logger.info(f"{model} is already downloaded.")
return True
def download_all_dependencies(self):
for dependency in self.dependencies:
if not self.check_dependency_available(dependency):
logger.init(f"{dependency}",status="Downloading")
self.download_dependency(dependency)
else:
logger.info(f"{dependency} is already installed.")
return True
def download_all(self):
self.download_all_dependencies()
self.download_all_models()
return True
def check_all_available(self):
for model in self.models:
if not self.check_available(self.get_model_files(model)):
return False
for dependency in self.dependencies:
if not self.check_available(self.get_dependency_files(dependency)):
return False
return True
def check_model_available(self, model_name):
if model_name not in self.models:
return False
return self.check_available(self.get_model_files(model_name))
def check_dependency_available(self, dependency_name):
if dependency_name not in self.dependencies:
return False
return self.check_available(self.get_dependency_files(dependency_name))
def check_all_available(self):
for model in self.models:
if not self.check_model_available(model):
return False
for dependency in self.dependencies:
if not self.check_dependency_available(dependency):
return False
return True
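A hedged usage sketch for the ModelManager above; 'stable_diffusion' is only an assumed key, since the real names depend on what ./db.json contains, and the credentials are placeholders:

mm = ModelManager(hf_auth={'username': 'user', 'password': 'hf_token'})
mm.init()                                   # compare db.json / db_dep.json against what is on disk
if 'stable_diffusion' not in mm.get_available_models():
    mm.download_model('stable_diffusion')   # validates md5sums and re-downloads tainted files
if mm.load_model('stable_diffusion', precision='half', gpu_id=0):
    loaded = mm.get_loaded_model('stable_diffusion')   # {'model': ..., 'device': ...}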

View File

View File

@ -0,0 +1,48 @@
# Class realesrgan
# Inputs:
# - model
# - device
# - output_dir
# - output_ext
# outputs:
# - output_images
import PIL
from torchvision import transforms
import numpy as np
import os
import cv2
from nataili.util.save_sample import save_sample
class realesrgan:
def __init__(self, model, device, output_dir, output_ext='jpg'):
self.model = model
self.device = device
self.output_dir = output_dir
self.output_ext = output_ext
self.output_images = []
def generate(self, input_image):
# load image
img = cv2.imread(input_image, cv2.IMREAD_UNCHANGED)
if len(img.shape) == 3 and img.shape[2] == 4:
img_mode = 'RGBA'
else:
img_mode = None
# upscale
output, _ = self.model.enhance(img)
if img_mode == 'RGBA': # RGBA images should be saved in png format
self.output_ext = 'png'
esrgan_sample = output[:,:,::-1]
esrgan_image = PIL.Image.fromarray(esrgan_sample)
# append model name to output image name
filename = os.path.basename(input_image)
filename = os.path.splitext(filename)[0]
filename = f'{filename}_esrgan'
filename_with_ext = f'{filename}.{self.output_ext}'
output_image = os.path.join(self.output_dir, filename_with_ext)
save_sample(esrgan_image, filename, self.output_dir, self.output_ext)
self.output_images.append(output_image)
return
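A usage sketch for the realesrgan wrapper above, assuming it is importable alongside the ModelManager from earlier in this commit; 'RealESRGAN_x4plus' and the paths are illustrative:

mm = ModelManager()
mm.init()
if mm.load_model('RealESRGAN_x4plus'):                       # assumed key in db.json
    loaded = mm.get_loaded_model('RealESRGAN_x4plus')        # {'model': RealESRGANer, 'device': ...}
    upscaler = realesrgan(model=loaded['model'], device=loaded['device'],
                          output_dir='outputs/esrgan', output_ext='jpg')
    upscaler.generate('inputs/photo.png')                    # writes outputs/esrgan/photo_esrgan.jpg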

View File

View File

@ -0,0 +1 @@
from nataili.util.logger import logger, set_logger_verbosity, quiesce_logger, test_logger

View File

@ -0,0 +1,16 @@
import gc
import torch
import threading
import pynvml
import time
with torch.no_grad():
def torch_gc():
for _ in range(2):
gc.collect()
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
torch.cuda.synchronize()
torch.cuda.reset_peak_memory_stats()
torch.cuda.reset_accumulated_memory_stats()

View File

@ -0,0 +1,18 @@
def check_prompt_length(model, prompt, comments):
"""this function tests if prompt is too long, and if so, adds a message to comments"""
tokenizer = model.cond_stage_model.tokenizer
max_length = model.cond_stage_model.max_length
info = model.cond_stage_model.tokenizer([prompt], truncation=True, max_length=max_length,
return_overflowing_tokens=True, padding="max_length", return_tensors="pt")
ovf = info['overflowing_tokens'][0]
overflowing_count = ovf.shape[0]
if overflowing_count == 0:
return
vocab = {v: k for k, v in tokenizer.get_vocab().items()}
overflowing_words = [vocab.get(int(x), "") for x in ovf]
overflowing_text = tokenizer.convert_tokens_to_string(''.join(overflowing_words))
comments.append(f"Warning: too many input tokens; some ({len(overflowing_words)}) have been truncated:\n{overflowing_text}\n")
del tokenizer

View File

@ -0,0 +1,22 @@
from pathlib import Path
def get_next_sequence_number(path, prefix=''):
"""
Determines and returns the next sequence number to use when saving an
image in the specified directory.
If a prefix is given, only consider files whose names start with that
prefix, and strip the prefix from filenames before extracting their
sequence number.
The sequence starts at 0.
"""
result = -1
for p in Path(path).iterdir():
if p.name.endswith(('.png', '.jpg')) and p.name.startswith(prefix):
tmp = p.name[len(prefix):]
try:
result = max(int(tmp.split('-')[0]), result)
except ValueError:
pass
return result + 1
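For example, with two hypothetical files already on disk, the helper above behaves like this:

# given outputs/samples/00012-foo.png and outputs/samples/00013-bar.png:
get_next_sequence_number('outputs/samples')           # -> 14
get_next_sequence_number('outputs/samples', 'grid-')  # -> 0 (no file starts with 'grid-')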

View File

@ -0,0 +1,21 @@
import math
import PIL
def image_grid(imgs, n_rows=None):
if n_rows is not None:
rows = n_rows
else:
rows = math.sqrt(len(imgs))
rows = round(rows)
cols = math.ceil(len(imgs) / rows)
w, h = imgs[0].size
grid = PIL.Image.new('RGB', size=(cols * w, rows * h), color='black')
for i, img in enumerate(imgs):
grid.paste(img, box=(i % cols * w, i // cols * h))
return grid
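A tiny illustration of the layout math above (the images are synthetic placeholders):

# four equally sized images -> rows = round(sqrt(4)) = 2, cols = ceil(4 / 2) = 2
imgs = [PIL.Image.new('RGB', (64, 64), c) for c in ('red', 'green', 'blue', 'white')]
grid = image_grid(imgs)                      # a 128x128, 2x2 contact sheet
wide = image_grid(imgs + imgs[:2], n_rows=2) # six images forced into 2 rows of 3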

View File

@ -0,0 +1,40 @@
import os
import torch
def load_learned_embed_in_clip(learned_embeds_path, text_encoder, tokenizer, token=None):
loaded_learned_embeds = torch.load(learned_embeds_path, map_location="cpu")
# separate token and the embeds
if learned_embeds_path.endswith('.pt'):
# old format
# token = * so replace with file directory name when converting
trained_token = os.path.basename(learned_embeds_path)
params_dict = {
trained_token: torch.tensor(list(loaded_learned_embeds['string_to_param'].items())[0][1])
}
learned_embeds_path = os.path.splitext(learned_embeds_path)[0] + '.bin'
torch.save(params_dict, learned_embeds_path)
loaded_learned_embeds = torch.load(learned_embeds_path, map_location="cpu")
trained_token = list(loaded_learned_embeds.keys())[0]
embeds = loaded_learned_embeds[trained_token]
elif learned_embeds_path.endswith('.bin'):
trained_token = list(loaded_learned_embeds.keys())[0]
embeds = loaded_learned_embeds[trained_token]
embeds = loaded_learned_embeds[trained_token]
# cast to dtype of text_encoder
dtype = text_encoder.get_input_embeddings().weight.dtype
embeds = embeds.to(dtype)
# add the token in tokenizer
token = token if token is not None else trained_token
num_added_tokens = tokenizer.add_tokens(token)
# resize the token embeddings
text_encoder.resize_token_embeddings(len(tokenizer))
# get the id for the token and assign the embeds
token_id = tokenizer.convert_tokens_to_ids(token)
text_encoder.get_input_embeddings().weight.data[token_id] = embeds
return token
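This is the helper that process_prompt_tokens (in the img2img/txt2img classes above) calls for every <concept> it finds in a prompt. A hedged sketch of a direct call, where model stands for an already loaded checkpoint (e.g. ModelManager.load_ckpt(...)['model']) and the path and concept name are illustrative:

token = load_learned_embed_in_clip(
    'models/custom/sd-concepts-library/my-concept/learned_embeds.bin',   # illustrative path
    text_encoder=model.cond_stage_model.transformer,                     # as in process_prompt_tokens
    tokenizer=model.cond_stage_model.tokenizer,
    token='<my-concept>')
# prompts can now reference "<my-concept>", whose token id maps to the learned embedding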

View File

@ -0,0 +1,102 @@
import sys
from functools import partialmethod
from loguru import logger
STDOUT_LEVELS = ["GENERATION", "PROMPT"]
INIT_LEVELS = ["INIT", "INIT_OK", "INIT_WARN", "INIT_ERR"]
MESSAGE_LEVELS = ["MESSAGE"]
# By default we're at error level or higher
verbosity = 20
quiet = 0
def set_logger_verbosity(count):
global verbosity
# The count comes reversed: count = 0 means minimum verbosity,
# while count = 5 means maximum verbosity.
# The higher the count, the lower we drop the verbosity threshold.
verbosity = 20 - (count * 10)
def quiesce_logger(count):
global quiet
# The bigger the count, the more silent we want our logger
quiet = count * 10
def is_stdout_log(record):
if record["level"].name not in STDOUT_LEVELS:
return(False)
if record["level"].no < verbosity + quiet:
return(False)
return(True)
def is_init_log(record):
if record["level"].name not in INIT_LEVELS:
return(False)
if record["level"].no < verbosity + quiet:
return(False)
return(True)
def is_msg_log(record):
if record["level"].name not in MESSAGE_LEVELS:
return(False)
if record["level"].no < verbosity + quiet:
return(False)
return(True)
def is_stderr_log(record):
if record["level"].name in STDOUT_LEVELS + INIT_LEVELS + MESSAGE_LEVELS:
return(False)
if record["level"].no < verbosity + quiet:
return(False)
return(True)
def test_logger():
logger.generation("This is a generation message\nIt is typically multiline\nThree Lines".encode("unicode_escape").decode("utf-8"))
logger.prompt("This is a prompt message")
logger.debug("Debug Message")
logger.info("Info Message")
logger.warning("Info Warning")
logger.error("Error Message")
logger.critical("Critical Message")
logger.init("This is an init message", status="Starting")
logger.init_ok("This is an init message", status="OK")
logger.init_warn("This is an init message", status="Warning")
logger.init_err("This is an init message", status="Error")
logger.message("This is user message")
sys.exit()
logfmt = "<level>{level: <10}</level> | <green>{time:YYYY-MM-DD HH:mm:ss}</green> | <green>{name}</green>:<green>{function}</green>:<green>{line}</green> - <level>{message}</level>"
genfmt = "<level>{level: <10}</level> @ <green>{time:YYYY-MM-DD HH:mm:ss}</green> | <level>{message}</level>"
initfmt = "<magenta>INIT </magenta> | <level>{extra[status]: <11}</level> | <magenta>{message}</magenta>"
msgfmt = "<level>{level: <10}</level> | <level>{message}</level>"
try:
logger.level("GENERATION", no=24, color="<cyan>")
logger.level("PROMPT", no=23, color="<yellow>")
logger.level("INIT", no=31, color="<white>")
logger.level("INIT_OK", no=31, color="<green>")
logger.level("INIT_WARN", no=31, color="<yellow>")
logger.level("INIT_ERR", no=31, color="<red>")
# Messages contain important information without which this application might not be usable
# As such, they have the highest priority
logger.level("MESSAGE", no=61, color="<green>")
except TypeError:
pass
logger.__class__.generation = partialmethod(logger.__class__.log, "GENERATION")
logger.__class__.prompt = partialmethod(logger.__class__.log, "PROMPT")
logger.__class__.init = partialmethod(logger.__class__.log, "INIT")
logger.__class__.init_ok = partialmethod(logger.__class__.log, "INIT_OK")
logger.__class__.init_warn = partialmethod(logger.__class__.log, "INIT_WARN")
logger.__class__.init_err = partialmethod(logger.__class__.log, "INIT_ERR")
logger.__class__.message = partialmethod(logger.__class__.log, "MESSAGE")
config = {
"handlers": [
{"sink": sys.stderr, "format": logfmt, "colorize":True, "filter": is_stderr_log},
{"sink": sys.stdout, "format": genfmt, "level": "PROMPT", "colorize":True, "filter": is_stdout_log},
{"sink": sys.stdout, "format": initfmt, "level": "INIT", "colorize":True, "filter": is_init_log},
{"sink": sys.stdout, "format": msgfmt, "level": "MESSAGE", "colorize":True, "filter": is_msg_log}
],
}
logger.configure(**config)
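A brief sketch of how the custom levels above are used elsewhere in this commit (ModelManager, for instance); the verbosity values are only illustrative:

from nataili.util.logger import logger, set_logger_verbosity, quiesce_logger

set_logger_verbosity(3)    # verbosity = 20 - 3*10 = -10, i.e. show nearly everything
quiesce_logger(0)          # no extra quieting
logger.init("Model Reference", status="Downloading")
logger.init_ok("Model Reference", status="OK")
logger.generation("a castle on a hill\nseed 42")   # routed to stdout by is_stdout_log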

View File

@ -0,0 +1,20 @@
import os
def save_sample(image, filename, sample_path, extension='png', jpg_quality=95, webp_quality=95, webp_lossless=True, png_compression=9):
path = os.path.join(sample_path, filename + '.' + extension)
if os.path.exists(path):
return False
if not os.path.exists(sample_path):
os.makedirs(sample_path)
if extension == 'png':
image.save(path, format='PNG', compress_level=png_compression)
elif extension == 'jpg':
image.save(path, quality=jpg_quality, optimize=True)
elif extension == 'webp':
image.save(path, quality=webp_quality, lossless=webp_lossless)
else:
return False
if os.path.exists(path):
return True
else:
return False
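An illustrative call (image is assumed to be a PIL image produced by one of the generators above):

# returns True only if the file did not already exist and was written successfully
ok = save_sample(image, '00001-50_k_lms_42_a-castle-on-a-hill', 'outputs/samples', extension='jpg')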

View File

@ -0,0 +1,22 @@
import random
def seed_to_int(s):
if type(s) is int:
return s
if s is None or s == '':
return random.randint(0, 2**32 - 1)
if type(s) is list:
seed_list = []
for seed in s:
if seed is None or seed == '':
seed_list.append(random.randint(0, 2**32 - 1))
else:
seed_list.append(seed_to_int(seed))
return seed_list
n = abs(int(s) if s.isdigit() else random.Random(s).randint(0, 2**32 - 1))
while n >= 2**32:
n = n >> 32
return n
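A few illustrative calls for seed_to_int above:

seed_to_int(1234)        # -> 1234 (ints pass through unchanged)
seed_to_int('')          # -> a fresh random 32-bit seed
seed_to_int('7')         # -> 7 (numeric strings are parsed)
seed_to_int('banana')    # -> a deterministic seed derived from hashing the string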

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or
@ -238,7 +238,7 @@ def layout():
with st.container(): with st.container():
if downloaded_concepts_count == 0: if downloaded_concepts_count == 0:
st.write("You don't have any concepts in your library ") st.write("You don't have any concepts in your library ")
st.markdown("To add concepts to your library, download some from the [sd-concepts-library](https://github.com/sd-webui/sd-concepts-library) \ st.markdown("To add concepts to your library, download some from the [sd-concepts-library](https://github.com/Sygil-Dev/sd-concepts-library) \
repository and save the content of `sd-concepts-library` into ```./models/custom/sd-concepts-library``` or just create your own concepts :wink:.", unsafe_allow_html=False) repository and save the content of `sd-concepts-library` into ```./models/custom/sd-concepts-library``` or just create your own concepts :wink:.", unsafe_allow_html=False)
else: else:
if len(st.session_state["results"]) == 0: if len(st.session_state["results"]) == 0:

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or

View File

@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or
@ -18,9 +18,9 @@
import gfpgan import gfpgan
import hydralit as st import hydralit as st
# streamlit imports # streamlit imports
from streamlit import StopException, StreamlitAPIException from streamlit import StopException, StreamlitAPIException
#from streamlit.runtime.scriptrunner import script_run_context
#streamlit components section #streamlit components section
from streamlit_server_state import server_state, server_state_lock from streamlit_server_state import server_state, server_state_lock
@ -33,7 +33,7 @@ import streamlit_nested_layout
import warnings import warnings
import json import json
import base64 import base64, cv2
import os, sys, re, random, datetime, time, math, glob, toml import os, sys, re, random, datetime, time, math, glob, toml
import gc import gc
from PIL import Image, ImageFont, ImageDraw, ImageFilter from PIL import Image, ImageFont, ImageDraw, ImageFilter
@ -66,13 +66,31 @@ import piexif.helper
from tqdm import trange from tqdm import trange
from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.ddim import DDIMSampler
from ldm.util import ismap from ldm.util import ismap
from typing import Dict #from abc import ABC, abstractmethod
from typing import Dict, Union
from io import BytesIO from io import BytesIO
from packaging import version from packaging import version
from uuid import uuid4
from pathlib import Path
from huggingface_hub import hf_hub_download
#import librosa #import librosa
from logger import logger, set_logger_verbosity, quiesce_logger #from logger import logger, set_logger_verbosity, quiesce_logger
#from loguru import logger #from loguru import logger
#from nataili.inference.compvis.img2img import img2img
#from nataili.model_manager import ModelManager
#from nataili.inference.compvis.txt2img import txt2img
from nataili.util.cache import torch_gc
from nataili.util.logger import logger, set_logger_verbosity, quiesce_logger
try:
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
except ImportError as e:
logger.error("You tried to import realesrgan without having it installed properly. To install Real-ESRGAN, run:\n\n"
"pip install realesrgan")
# Temp imports # Temp imports
#from basicsr.utils.registry import ARCH_REGISTRY #from basicsr.utils.registry import ARCH_REGISTRY
@ -80,12 +98,6 @@ from logger import logger, set_logger_verbosity, quiesce_logger
# end of imports # end of imports
#--------------------------------------------------------------------------------------------------------------- #---------------------------------------------------------------------------------------------------------------
# we make a log file where we store the logs
logger.add("logs/log_{time:MM-DD-YYYY!UTC}.log", rotation="8 MB", compression="zip", level='INFO') # Once the file is too old, it's rotated
logger.add(sys.stderr, diagnose=True)
logger.add(sys.stdout)
logger.enable("")
try: try:
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start. # this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
from transformers import logging from transformers import logging
@ -106,6 +118,8 @@ mimetypes.add_type('application/javascript', '.js')
opt_C = 4 opt_C = 4
opt_f = 8 opt_f = 8
# The model manager loads and unloads the SD models and has features to download them or find their location
#model_manager = ModelManager()
def load_configs(): def load_configs():
if not "defaults" in st.session_state: if not "defaults" in st.session_state:
@ -147,7 +161,11 @@ def load_configs():
import modeldownload import modeldownload
modeldownload.updateModels() modeldownload.updateModels()
if "keep_all_models_loaded" in st.session_state: if "keep_all_models_loaded" in st.session_state.defaults.general:
with server_state_lock["keep_all_models_loaded"]:
server_state["keep_all_models_loaded"] = st.session_state["defaults"].general.keep_all_models_loaded
else:
st.session_state["defaults"].general.keep_all_models_loaded = False
with server_state_lock["keep_all_models_loaded"]: with server_state_lock["keep_all_models_loaded"]:
server_state["keep_all_models_loaded"] = st.session_state["defaults"].general.keep_all_models_loaded server_state["keep_all_models_loaded"] = st.session_state["defaults"].general.keep_all_models_loaded
@ -161,38 +179,38 @@ load_configs()
#else: #else:
#app = None #app = None
# should and will be moved to a settings menu in the UI at some point #
grid_format = [s.lower() for s in st.session_state["defaults"].general.grid_format.split(':')] grid_format = st.session_state["defaults"].general.save_format
grid_lossless = False grid_lossless = False
grid_quality = 100 grid_quality = st.session_state["defaults"].general.grid_quality
if grid_format[0] == 'png': if grid_format == 'png':
grid_ext = 'png' grid_ext = 'png'
grid_format = 'png' grid_format = 'png'
elif grid_format[0] in ['jpg', 'jpeg']: elif grid_format in ['jpg', 'jpeg']:
grid_quality = int(grid_format[1]) if len(grid_format) > 1 else 100 grid_quality = int(grid_format) if len(grid_format) > 1 else 100
grid_ext = 'jpg' grid_ext = 'jpg'
grid_format = 'jpeg' grid_format = 'jpeg'
elif grid_format[0] == 'webp': elif grid_format[0] == 'webp':
grid_quality = int(grid_format[1]) if len(grid_format) > 1 else 100 grid_quality = int(grid_format) if len(grid_format) > 1 else 100
grid_ext = 'webp' grid_ext = 'webp'
grid_format = 'webp' grid_format = 'webp'
if grid_quality < 0: # e.g. webp:-100 for lossless mode if grid_quality < 0: # e.g. webp:-100 for lossless mode
grid_lossless = True grid_lossless = True
grid_quality = abs(grid_quality) grid_quality = abs(grid_quality)
# should and will be moved to a settings menu in the UI at some point #
save_format = [s.lower() for s in st.session_state["defaults"].general.save_format.split(':')] save_format = st.session_state["defaults"].general.save_format
save_lossless = False save_lossless = False
save_quality = 100 save_quality = 100
if save_format[0] == 'png': if save_format == 'png':
save_ext = 'png' save_ext = 'png'
save_format = 'png' save_format = 'png'
elif save_format[0] in ['jpg', 'jpeg']: elif save_format in ['jpg', 'jpeg']:
save_quality = int(save_format[1]) if len(save_format) > 1 else 100 save_quality = int(save_format) if len(save_format) > 1 else 100
save_ext = 'jpg' save_ext = 'jpg'
save_format = 'jpeg' save_format = 'jpeg'
elif save_format[0] == 'webp': elif save_format == 'webp':
save_quality = int(save_format[1]) if len(save_format) > 1 else 100 save_quality = int(save_format) if len(save_format) > 1 else 100
save_ext = 'webp' save_ext = 'webp'
save_format = 'webp' save_format = 'webp'
if save_quality < 0: # e.g. webp:-100 for lossless mode if save_quality < 0: # e.g. webp:-100 for lossless mode
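Note: the comments above refer to a `format[:quality]` convention for these config values (e.g. `webp:-100` selects lossless WebP). As an illustrative sketch only, not code from this repository, that convention could be parsed with a hypothetical helper like the following:

```python
def parse_image_format(spec: str, default_quality: int = 100):
    """Parse a 'format[:quality]' string such as 'png', 'jpg:90' or 'webp:-100'.

    A negative quality selects lossless mode (mirroring the 'webp:-100' comment
    above). Returns (pil_format, extension, quality, lossless).
    """
    parts = [s.lower() for s in spec.split(':')]
    fmt = parts[0]
    quality = int(parts[1]) if len(parts) > 1 else default_quality
    lossless = quality < 0
    quality = abs(quality)

    if fmt == 'png':
        return 'png', 'png', quality, lossless
    if fmt in ('jpg', 'jpeg'):
        return 'jpeg', 'jpg', quality, lossless
    if fmt == 'webp':
        return 'webp', 'webp', quality, lossless
    raise ValueError(f"Unsupported image format spec: {spec!r}")


# parse_image_format('webp:-100') -> ('webp', 'webp', 100, True)
```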
@ -248,6 +266,44 @@ def set_page_title(title):
</script>" /> </script>" />
""") """)
def make_grid(n_items=5, n_cols=5):
n_rows = 1 + n_items // int(n_cols)
rows = [st.container() for _ in range(n_rows)]
cols_per_row = [r.columns(n_cols) for r in rows]
cols = [column for row in cols_per_row for column in row]
return cols
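A brief usage sketch for the `make_grid` helper above, assuming it runs inside a Streamlit page (the image paths are placeholders):

```python
# Lay out eight thumbnails in rows of four columns.
image_paths = [f"outputs/frame_{i:04d}.png" for i in range(8)]  # placeholder paths
cols = make_grid(n_items=len(image_paths), n_cols=4)
for col, path in zip(cols, image_paths):
    col.image(path)
```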
def merge(file1, file2, out, weight):
alpha = (weight)/100
if not(file1.endswith(".ckpt")):
file1 += ".ckpt"
if not(file2.endswith(".ckpt")):
file2 += ".ckpt"
if not(out.endswith(".ckpt")):
out += ".ckpt"
#Load Models
model_0 = torch.load(file1)
model_1 = torch.load(file2)
theta_0 = model_0['state_dict']
theta_1 = model_1['state_dict']
for key in theta_0.keys():
if 'model' in key and key in theta_1:
theta_0[key] = (alpha) * theta_0[key] + (1-alpha) * theta_1[key]
logger.info("RUNNING...\n(STAGE 2)")
for key in theta_1.keys():
if 'model' in key and key not in theta_0:
theta_0[key] = theta_1[key]
torch.save(model_0, out)
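A minimal usage sketch for the new checkpoint-merging helper above. The paths are hypothetical; `weight` is read as a percentage of the first model, so `weight=70` keeps 70% of `file1` and 30% of `file2` for every shared `model` key:

```python
# Blend two .ckpt files; the .ckpt extension is appended automatically when missing.
merge("models/custom/modelA", "models/custom/modelB", "models/custom/merged", weight=70)
```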
def human_readable_size(size, decimal_places=3): def human_readable_size(size, decimal_places=3):
"""Return a human readable size from bytes.""" """Return a human readable size from bytes."""
for unit in ['B','KB','MB','GB','TB']: for unit in ['B','KB','MB','GB','TB']:
@ -258,9 +314,11 @@ def human_readable_size(size, decimal_places=3):
def load_models(use_LDSR = False, LDSR_model='model', use_GFPGAN=False, GFPGAN_model='GFPGANv1.4', use_RealESRGAN=False, RealESRGAN_model="RealESRGAN_x4plus", def load_models(use_LDSR = False, LDSR_model='model', use_GFPGAN=False, GFPGAN_model='GFPGANv1.4', use_RealESRGAN=False, RealESRGAN_model="RealESRGAN_x4plus",
CustomModel_available=False, custom_model="Stable Diffusion v1.4"): CustomModel_available=False, custom_model="Stable Diffusion v1.5"):
"""Load the different models. We also reuse the models that are already in memory to speed things up instead of loading them again. """ """Load the different models. We also reuse the models that are already in memory to speed things up instead of loading them again. """
#model_manager.init()
logger.info("Loading models.") logger.info("Loading models.")
if "progress_bar_text" in st.session_state: if "progress_bar_text" in st.session_state:
@ -428,6 +486,7 @@ def load_model_from_config(config, ckpt, verbose=False):
logger.info(f"Loading model from {ckpt}") logger.info(f"Loading model from {ckpt}")
try:
pl_sd = torch.load(ckpt, map_location="cpu") pl_sd = torch.load(ckpt, map_location="cpu")
if "global_step" in pl_sd: if "global_step" in pl_sd:
logger.info(f"Global Step: {pl_sd['global_step']}") logger.info(f"Global Step: {pl_sd['global_step']}")
@ -443,8 +502,18 @@ def load_model_from_config(config, ckpt, verbose=False):
model.cuda() model.cuda()
model.eval() model.eval()
return model return model
except FileNotFoundError:
if "progress_bar_text" in st.session_state:
st.session_state["progress_bar_text"].error(
"You need to download the Stable Diffusion model in order to use the UI. Use the Model Manager page in order to download the model."
)
raise FileNotFoundError("You need to download the Stable Diffusion model in order to use the UI. Use the Model Manager page in order to download the model.")
def load_sd_from_config(ckpt, verbose=False): def load_sd_from_config(ckpt, verbose=False):
logger.info(f"Loading model from {ckpt}") logger.info(f"Loading model from {ckpt}")
@ -454,7 +523,6 @@ def load_sd_from_config(ckpt, verbose=False):
sd = pl_sd["state_dict"] sd = pl_sd["state_dict"]
return sd return sd
class MemUsageMonitor(threading.Thread): class MemUsageMonitor(threading.Thread):
stop_flag = False stop_flag = False
max_usage = 0 max_usage = 0
@ -1319,6 +1387,77 @@ def load_RealESRGAN(model_name: str):
return server_state['RealESRGAN'] return server_state['RealESRGAN']
#
class RealESRGANModel(nn.Module):
def __init__(self, model_path, tile=0, tile_pad=10, pre_pad=0, fp32=False):
super().__init__()
try:
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer
except ImportError as e:
logger.error(
"You tried to import realesrgan without having it installed properly. To install Real-ESRGAN, run:\n\n"
"pip install realesrgan"
)
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
self.upsampler = RealESRGANer(
scale=4, model_path=model_path, model=model, tile=tile, tile_pad=tile_pad, pre_pad=pre_pad, half=not fp32
)
def forward(self, image, outscale=4, convert_to_pil=True):
"""Upsample an image array or path.
Args:
image (Union[np.ndarray, str]): Either a np array or an image path. np array is assumed to be in RGB format,
and we convert it to BGR.
outscale (int, optional): Amount to upscale the image. Defaults to 4.
convert_to_pil (bool, optional): If True, return PIL image. Otherwise, return numpy array (BGR). Defaults to True.
Returns:
Union[np.ndarray, PIL.Image.Image]: An upsampled version of the input image.
"""
if isinstance(image, (str, Path)):
img = cv2.imread(image, cv2.IMREAD_UNCHANGED)
else:
img = image
img = (img * 255).round().astype("uint8")
img = img[:, :, ::-1]
image, _ = self.upsampler.enhance(img, outscale=outscale)
if convert_to_pil:
image = Image.fromarray(image[:, :, ::-1])
return image
@classmethod
def from_pretrained(cls, model_name_or_path="nateraw/real-esrgan"):
"""Initialize a pretrained Real-ESRGAN upsampler.
Args:
model_name_or_path (str, optional): The Hugging Face repo ID or path to local model. Defaults to 'nateraw/real-esrgan'.
Returns:
PipelineRealESRGAN: An instance of `PipelineRealESRGAN` instantiated from pretrained model.
"""
# re-uploaded from the official ones mentioned here:
# https://github.com/xinntao/Real-ESRGAN
if Path(model_name_or_path).exists():
file = model_name_or_path
else:
file = hf_hub_download(model_name_or_path, "RealESRGAN_x4plus.pth")
return cls(file)
def upsample_imagefolder(self, in_dir, out_dir, suffix="out", outfile_ext=".png"):
in_dir, out_dir = Path(in_dir), Path(out_dir)
if not in_dir.exists():
raise FileNotFoundError(f"Provided input directory {in_dir} does not exist")
out_dir.mkdir(exist_ok=True, parents=True)
image_paths = [x for x in in_dir.glob("*") if x.suffix.lower() in [".png", ".jpg", ".jpeg"]]
for image in image_paths:
im = self(str(image))
out_filepath = out_dir / (image.stem + suffix + outfile_ext)
im.save(out_filepath)
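As a rough usage sketch of the `RealESRGANModel` wrapper above (`nateraw/real-esrgan` is the default repo ID shown in `from_pretrained`; the input and output paths are placeholders):

```python
# Download the x4 upsampler weights and upscale a single image (returns a PIL image).
upsampler = RealESRGANModel.from_pretrained("nateraw/real-esrgan")
upscaled = upsampler("inputs/low_res.png", outscale=4)
upscaled.save("outputs/low_res_x4.png")

# Or upscale every .png/.jpg/.jpeg in a folder.
upsampler.upsample_imagefolder("inputs/", "outputs/")
```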
# #
@retry(tries=5) @retry(tries=5)
def load_LDSR(model_name="model", config="project", checking=False): def load_LDSR(model_name="model", config="project", checking=False):
@ -1691,15 +1830,20 @@ def image_grid(imgs, batch_size, force_n_rows=None, captions=None):
w, h = imgs[0].size w, h = imgs[0].size
grid = Image.new('RGB', size=(cols * w, rows * h), color='black') grid = Image.new('RGB', size=(cols * w, rows * h), color='black')
try:
fnt = get_font(30) fnt = get_font(30)
except Exception:
pass
for i, img in enumerate(imgs): for i, img in enumerate(imgs):
grid.paste(img, box=(i % cols * w, i // cols * h)) grid.paste(img, box=(i % cols * w, i // cols * h))
try:
if captions and i<len(captions): if captions and i<len(captions):
d = ImageDraw.Draw( grid ) d = ImageDraw.Draw( grid )
size = d.textbbox( (0,0), captions[i], font=fnt, stroke_width=2, align="center" ) size = d.textbbox( (0,0), captions[i], font=fnt, stroke_width=2, align="center" )
d.multiline_text((i % cols * w + w/2, i // cols * h + h - size[3]), captions[i], font=fnt, fill=(255,255,255), stroke_width=2, stroke_fill=(0,0,0), anchor="mm", align="center") d.multiline_text((i % cols * w + w/2, i // cols * h + h - size[3]), captions[i], font=fnt, fill=(255,255,255), stroke_width=2, stroke_fill=(0,0,0), anchor="mm", align="center")
except Exception:
pass
return grid return grid
def seed_to_int(s): def seed_to_int(s):
@ -1708,6 +1852,9 @@ def seed_to_int(s):
if s is None or s == '': if s is None or s == '':
return random.randint(0, 2**32 - 1) return random.randint(0, 2**32 - 1)
if ',' in s:
s = s.split(',')
if type(s) is list: if type(s) is list:
seed_list = [] seed_list = []
for seed in s: for seed in s:
@ -1832,7 +1979,7 @@ def custom_models_available():
with server_state_lock["CustomModel_available"]: with server_state_lock["CustomModel_available"]:
if len(server_state["custom_models"]) > 0: if len(server_state["custom_models"]) > 0:
server_state["CustomModel_available"] = True server_state["CustomModel_available"] = True
server_state["custom_models"].append("Stable Diffusion v1.4") server_state["custom_models"].append("Stable Diffusion v1.5")
else: else:
server_state["CustomModel_available"] = False server_state["CustomModel_available"] = False
@ -1919,6 +2066,7 @@ def save_sample(image, sample_path_i, filename, jpg_sample, prompts, seeds, widt
filename_i = os.path.join(sample_path_i, filename) filename_i = os.path.join(sample_path_i, filename)
if "defaults" in st.session_state:
if st.session_state['defaults'].general.save_metadata or write_info_files: if st.session_state['defaults'].general.save_metadata or write_info_files:
# toggles differ for txt2img vs. img2img: # toggles differ for txt2img vs. img2img:
offset = 0 if init_img is None else 2 offset = 0 if init_img is None else 2
@ -2301,7 +2449,7 @@ def process_images(
full_path = os.path.join(os.getcwd(), sample_path, sanitized_prompt) full_path = os.path.join(os.getcwd(), sample_path, sanitized_prompt)
sanitized_prompt = sanitized_prompt[:200-len(full_path)] sanitized_prompt = sanitized_prompt[:120-len(full_path)]
sample_path_i = os.path.join(sample_path, sanitized_prompt) sample_path_i = os.path.join(sample_path, sanitized_prompt)
#print(f"output folder length: {len(os.path.join(os.getcwd(), sample_path_i))}") #print(f"output folder length: {len(os.path.join(os.getcwd(), sample_path_i))}")
@ -2314,7 +2462,7 @@ def process_images(
full_path = os.path.join(os.getcwd(), sample_path) full_path = os.path.join(os.getcwd(), sample_path)
sample_path_i = sample_path sample_path_i = sample_path
base_count = get_next_sequence_number(sample_path_i) base_count = get_next_sequence_number(sample_path_i)
filename = f"{base_count:05}-{steps}_{sampler_name}_{seeds[i]}_{sanitized_prompt}"[:200-len(full_path)] #same as before filename = f"{base_count:05}-{steps}_{sampler_name}_{seeds[i]}_{sanitized_prompt}"[:120-len(full_path)] #same as before
x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c') x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
x_sample = x_sample.astype(np.uint8) x_sample = x_sample.astype(np.uint8)
@ -2527,7 +2675,7 @@ def process_images(
#output_images.append(image) #output_images.append(image)
#if simple_templating: #if simple_templating:
#grid_captions.append( captions[i] ) #grid_captions.append( captions[i] )
if "defaults" in st.session_state:
if st.session_state['defaults'].general.optimized: if st.session_state['defaults'].general.optimized:
mem = torch.cuda.memory_allocated()/1e6 mem = torch.cuda.memory_allocated()/1e6
server_state["modelFS"].to("cpu") server_state["modelFS"].to("cpu")
@ -2567,7 +2715,7 @@ def process_images(
output_images.insert(0, grid) output_images.insert(0, grid)
grid_count = get_next_sequence_number(outpath, 'grid-') grid_count = get_next_sequence_number(outpath, 'grid-')
grid_file = f"grid-{grid_count:05}-{seed}_{slugify(prompts[i].replace(' ', '_')[:200-len(full_path)])}.{grid_ext}" grid_file = f"grid-{grid_count:05}-{seed}_{slugify(prompts[i].replace(' ', '_')[:120-len(full_path)])}.{grid_ext}"
grid.save(os.path.join(outpath, grid_file), grid_format, quality=grid_quality, lossless=grid_lossless, optimize=True) grid.save(os.path.join(outpath, grid_file), grid_format, quality=grid_quality, lossless=grid_lossless, optimize=True)
toc = time.time() toc = time.time()


@ -10,7 +10,6 @@ import time
import json import json
import torch import torch
from diffusers import ModelMixin
from diffusers.configuration_utils import FrozenDict from diffusers.configuration_utils import FrozenDict
from diffusers.models import AutoencoderKL, UNet2DConditionModel from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.pipeline_utils import DiffusionPipeline from diffusers.pipeline_utils import DiffusionPipeline
@ -22,59 +21,39 @@ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
from torch import nn from torch import nn
from .upsampling import RealESRGANModel from sd_utils import RealESRGANModel
logger = logging.get_logger(__name__) # pylint: disable=invalid-name logger = logging.get_logger(__name__) # pylint: disable=invalid-name
def get_spec_norm(wav, sr, n_mels=512, hop_length=704): def get_timesteps_arr(audio_filepath, offset, duration, fps=30, margin=1.0, smooth=0.0):
"""Obtain maximum value for each time-frame in Mel Spectrogram, y, sr = librosa.load(audio_filepath, offset=offset, duration=duration)
and normalize between 0 and 1
Borrowed from lucid sonic dreams repo. In there, they programatically determine hop length # librosa.stft hardcoded defaults...
but I really didn't understand what was going on so I removed it and hard coded the output. # n_fft defaults to 2048
""" # hop length is win_length // 4
# win_length defaults to n_fft
D = librosa.stft(y, n_fft=2048, hop_length=2048 // 4, win_length=2048)
# Generate Mel Spectrogram # Extract percussive elements
spec_raw = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=n_mels, hop_length=hop_length) D_harmonic, D_percussive = librosa.decompose.hpss(D, margin=margin)
y_percussive = librosa.istft(D_percussive, length=len(y))
# Obtain maximum value per time-frame # Get normalized melspectrogram
spec_raw = librosa.feature.melspectrogram(y=y_percussive, sr=sr)
spec_max = np.amax(spec_raw, axis=0) spec_max = np.amax(spec_raw, axis=0)
# Normalize all values between 0 and 1
spec_norm = (spec_max - np.min(spec_max)) / np.ptp(spec_max) spec_norm = (spec_max - np.min(spec_max)) / np.ptp(spec_max)
return spec_norm # Resize cumsum of spec norm to our desired number of interpolation frames
x_norm = np.linspace(0, spec_norm.shape[-1], spec_norm.shape[-1])
y_norm = np.cumsum(spec_norm)
y_norm /= y_norm[-1]
x_resize = np.linspace(0, y_norm.shape[-1], int(duration*fps))
T = np.interp(x_resize, x_norm, y_norm)
def get_timesteps_arr(audio_filepath, offset, duration, fps=30, margin=(1.0, 5.0)): # Apply smoothing
"""Get the array that will be used to determine how much to interpolate between images. return T * (1 - smooth) + np.linspace(0.0, 1.0, T.shape[0]) * smooth
Normally, this is just a linspace between 0 and 1 for the number of frames to generate. In this case,
we want to use the amplitude of the audio to determine how much to interpolate between images.
So, here we:
1. Load the audio file
2. Split the audio into harmonic and percussive components
3. Get the normalized amplitude of the percussive component, resized to the number of frames
4. Get the cumulative sum of the amplitude array
5. Normalize the cumulative sum between 0 and 1
6. Return the array
I honestly have no clue what I'm doing here. Suggestions welcome.
"""
y, sr = librosa.load(audio_filepath, offset=offset, duration=duration)
wav_harmonic, wav_percussive = librosa.effects.hpss(y, margin=margin)
# Apparently n_mels is supposed to be input shape but I don't think it matters here?
frame_duration = int(sr / fps)
wav_norm = get_spec_norm(wav_percussive, sr, n_mels=512, hop_length=frame_duration)
amplitude_arr = np.resize(wav_norm, int(duration * fps))
T = np.cumsum(amplitude_arr)
T /= T[-1]
T[0] = 0.0
return T
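Both the removed and the new implementation return an array `T` of per-frame interpolation weights rising from 0 to 1, advancing faster where the percussive signal is louder. The self-contained toy example below (illustrative only, with a synthetic envelope standing in for real audio) reproduces the post-processing steps of the new `get_timesteps_arr`:

```python
import numpy as np

# Synthetic amplitude envelope: a quiet stretch followed by a loud burst.
spec_norm = np.concatenate([np.full(50, 0.1), np.full(50, 1.0)])

# Same post-processing as get_timesteps_arr: cumulative sum, normalise to [0, 1],
# resample to the desired number of frames, then blend towards a plain linspace.
frames, smooth = 60, 0.0
y_norm = np.cumsum(spec_norm)
y_norm /= y_norm[-1]
x_norm = np.linspace(0, len(y_norm), len(y_norm))
x_resize = np.linspace(0, len(y_norm), frames)
T = np.interp(x_resize, x_norm, y_norm)
T = T * (1 - smooth) + np.linspace(0.0, 1.0, frames) * smooth

# T climbs slowly over the quiet half and quickly over the loud half, so most of the
# visual change in the generated clip lands on the loud section.
```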
def slerp(t, v0, v1, DOT_THRESHOLD=0.9995): def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
@ -130,7 +109,6 @@ def make_video_pyav(
frame = pil_to_tensor(Image.open(img)).unsqueeze(0) frame = pil_to_tensor(Image.open(img)).unsqueeze(0)
frames = frame if frames is None else torch.cat([frames, frame]) frames = frame if frames is None else torch.cat([frames, frame])
else: else:
frames = frames_or_frame_dir frames = frames_or_frame_dir
# TCHW -> THWC # TCHW -> THWC
@ -208,6 +186,16 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
new_config["steps_offset"] = 1 new_config["steps_offset"] = 1
scheduler._internal_dict = FrozenDict(new_config) scheduler._internal_dict = FrozenDict(new_config)
if safety_checker is None:
logger.warn(
f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
" that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
" results in services or applications open to the public. Both the diffusers team and Hugging Face"
" strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
" it only for use-cases that involve analyzing network behavior or auditing its results. For more"
" information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
)
self.register_modules( self.register_modules(
vae=vae, vae=vae,
text_encoder=text_encoder, text_encoder=text_encoder,
@ -251,6 +239,8 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
width: int = 512, width: int = 512,
num_inference_steps: int = 50, num_inference_steps: int = 50,
guidance_scale: float = 7.5, guidance_scale: float = 7.5,
negative_prompt: Optional[Union[str, List[str]]] = None,
num_images_per_prompt: Optional[int] = 1,
eta: float = 0.0, eta: float = 0.0,
generator: Optional[torch.Generator] = None, generator: Optional[torch.Generator] = None,
latents: Optional[torch.FloatTensor] = None, latents: Optional[torch.FloatTensor] = None,
@ -259,12 +249,13 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: Optional[int] = 1, callback_steps: Optional[int] = 1,
text_embeddings: Optional[torch.FloatTensor] = None, text_embeddings: Optional[torch.FloatTensor] = None,
**kwargs,
): ):
r""" r"""
Function invoked when calling the pipeline for generation. Function invoked when calling the pipeline for generation.
Args: Args:
prompt (`str` or `List[str]`): prompt (`str` or `List[str]`, *optional*, defaults to `None`):
The prompt or prompts to guide the image generation. The prompt or prompts to guide the image generation. If not provided, `text_embeddings` is required.
height (`int`, *optional*, defaults to 512): height (`int`, *optional*, defaults to 512):
The height in pixels of the generated image. The height in pixels of the generated image.
width (`int`, *optional*, defaults to 512): width (`int`, *optional*, defaults to 512):
@ -278,6 +269,11 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
usually at the expense of lower image quality. usually at the expense of lower image quality.
negative_prompt (`str` or `List[str]`, *optional*):
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
if `guidance_scale` is less than `1`).
num_images_per_prompt (`int`, *optional*, defaults to 1):
The number of images to generate per prompt.
eta (`float`, *optional*, defaults to 0.0): eta (`float`, *optional*, defaults to 0.0):
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
[`schedulers.DDIMScheduler`], will be ignored for others. [`schedulers.DDIMScheduler`], will be ignored for others.
@ -300,8 +296,10 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
callback_steps (`int`, *optional*, defaults to 1): callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function will be called. If not specified, the callback will be The frequency at which the `callback` function will be called. If not specified, the callback will be
called at every step. called at every step.
text_embeddings(`torch.FloatTensor`, *optional*): text_embeddings (`torch.FloatTensor`, *optional*, defaults to `None`):
Pre-generated text embeddings. Pre-generated text embeddings to be used as inputs for image generation. Can be used in place of
`prompt` to avoid re-computing the embeddings. If not provided, the embeddings will be generated from
the supplied `prompt`.
Returns: Returns:
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
@ -340,7 +338,7 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
if text_input_ids.shape[-1] > self.tokenizer.model_max_length: if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :]) removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
logger.warning( print(
"The following part of your input was truncated because CLIP can only handle sequences up to" "The following part of your input was truncated because CLIP can only handle sequences up to"
f" {self.tokenizer.model_max_length} tokens: {removed_text}" f" {self.tokenizer.model_max_length} tokens: {removed_text}"
) )
@ -349,21 +347,51 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
else: else:
batch_size = text_embeddings.shape[0] batch_size = text_embeddings.shape[0]
# duplicate text embeddings for each generation per prompt, using mps friendly method
bs_embed, seq_len, _ = text_embeddings.shape
text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
# here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
# of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
# corresponds to doing no classifier free guidance. # corresponds to doing no classifier free guidance.
do_classifier_free_guidance = guidance_scale > 1.0 do_classifier_free_guidance = guidance_scale > 1.0
# get unconditional embeddings for classifier free guidance # get unconditional embeddings for classifier free guidance
if do_classifier_free_guidance: if do_classifier_free_guidance:
# HACK - Not setting text_input_ids here when walking, so hard coding to max length of tokenizer uncond_tokens: List[str]
# TODO - Determine if this is OK to do if negative_prompt is None:
# max_length = text_input_ids.shape[-1] uncond_tokens = [""]
elif type(prompt) is not type(negative_prompt):
raise TypeError(
f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
f" {type(prompt)}."
)
elif isinstance(negative_prompt, str):
uncond_tokens = [negative_prompt]
elif batch_size != len(negative_prompt):
raise ValueError(
f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
" the batch size of `prompt`."
)
else:
uncond_tokens = negative_prompt
max_length = self.tokenizer.model_max_length max_length = self.tokenizer.model_max_length
uncond_input = self.tokenizer( uncond_input = self.tokenizer(
[""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" uncond_tokens,
padding="max_length",
max_length=max_length,
truncation=True,
return_tensors="pt",
) )
uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
# duplicate unconditional embeddings for each generation per prompt, using mps friendly method
seq_len = uncond_embeddings.shape[1]
uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
# For classifier free guidance, we need to do two forward passes. # For classifier free guidance, we need to do two forward passes.
# Here we concatenate the unconditional and text embeddings into a single batch # Here we concatenate the unconditional and text embeddings into a single batch
# to avoid doing two forward passes # to avoid doing two forward passes
@ -374,19 +402,20 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
# Unlike in other pipelines, latents need to be generated in the target device # Unlike in other pipelines, latents need to be generated in the target device
# for 1-to-1 results reproducibility with the CompVis implementation. # for 1-to-1 results reproducibility with the CompVis implementation.
# However this currently doesn't work in `mps`. # However this currently doesn't work in `mps`.
latents_device = "cpu" if self.device.type == "mps" else self.device latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
latents_shape = (batch_size, self.unet.in_channels, height // 8, width // 8) latents_dtype = text_embeddings.dtype
if latents is None: if latents is None:
latents = torch.randn( if self.device.type == "mps":
latents_shape, # randn does not exist on mps
generator=generator, latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
device=latents_device, self.device
dtype=text_embeddings.dtype,
) )
else:
latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
else: else:
if latents.shape != latents_shape: if latents.shape != latents_shape:
raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
latents = latents.to(latents_device) latents = latents.to(self.device)
# set timesteps # set timesteps
self.scheduler.set_timesteps(num_inference_steps) self.scheduler.set_timesteps(num_inference_steps)
@ -431,12 +460,19 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
image = self.vae.decode(latents).sample image = self.vae.decode(latents).sample
image = (image / 2 + 0.5).clamp(0, 1) image = (image / 2 + 0.5).clamp(0, 1)
image = image.cpu().permute(0, 2, 3, 1).numpy()
safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16
image = image.cpu().permute(0, 2, 3, 1).float().numpy()
if self.safety_checker is not None:
safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
self.device
)
image, has_nsfw_concept = self.safety_checker( image, has_nsfw_concept = self.safety_checker(
images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype) images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
) )
else:
has_nsfw_concept = None
if output_type == "pil": if output_type == "pil":
image = self.numpy_to_pil(image) image = self.numpy_to_pil(image)
@ -449,16 +485,9 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
def generate_inputs(self, prompt_a, prompt_b, seed_a, seed_b, noise_shape, T, batch_size): def generate_inputs(self, prompt_a, prompt_b, seed_a, seed_b, noise_shape, T, batch_size):
embeds_a = self.embed_text(prompt_a) embeds_a = self.embed_text(prompt_a)
embeds_b = self.embed_text(prompt_b) embeds_b = self.embed_text(prompt_b)
latents_a = torch.randn(
noise_shape, latents_a = self.init_noise(seed_a, noise_shape)
device=self.device, latents_b = self.init_noise(seed_b, noise_shape)
generator=torch.Generator(device=self.device).manual_seed(seed_a),
)
latents_b = torch.randn(
noise_shape,
device=self.device,
generator=torch.Generator(device=self.device).manual_seed(seed_b),
)
batch_idx = 0 batch_idx = 0
embeds_batch, noise_batch = None, None embeds_batch, noise_batch = None, None
@ -477,7 +506,7 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
torch.cuda.empty_cache() torch.cuda.empty_cache()
embeds_batch, noise_batch = None, None embeds_batch, noise_batch = None, None
def generate_interpolation_clip( def make_clip_frames(
self, self,
prompt_a: str, prompt_a: str,
prompt_b: str, prompt_b: str,
@ -530,7 +559,7 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
eta=eta, eta=eta,
num_inference_steps=num_inference_steps, num_inference_steps=num_inference_steps,
output_type="pil" if not upsample else "numpy", output_type="pil" if not upsample else "numpy",
)["sample"] )["images"]
for image in outputs: for image in outputs:
frame_filepath = save_path / (f"frame%06d{image_file_ext}" % frame_index) frame_filepath = save_path / (f"frame%06d{image_file_ext}" % frame_index)
@ -557,6 +586,8 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
resume: Optional[bool] = False, resume: Optional[bool] = False,
audio_filepath: str = None, audio_filepath: str = None,
audio_start_sec: Optional[Union[int, float]] = None, audio_start_sec: Optional[Union[int, float]] = None,
margin: Optional[float] = 1.0,
smooth: Optional[float] = 0.0,
): ):
"""Generate a video from a sequence of prompts and seeds. Optionally, add audio to the """Generate a video from a sequence of prompts and seeds. Optionally, add audio to the
video to interpolate to the intensity of the audio. video to interpolate to the intensity of the audio.
@ -603,13 +634,17 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
Optional path to an audio file to influence the interpolation rate. Optional path to an audio file to influence the interpolation rate.
audio_start_sec (Optional[Union[int, float]], *optional*, defaults to 0): audio_start_sec (Optional[Union[int, float]], *optional*, defaults to 0):
Global start time of the provided audio_filepath. Global start time of the provided audio_filepath.
margin (Optional[float], *optional*, defaults to 1.0):
Margin from librosa hpss to use for audio interpolation.
smooth (Optional[float], *optional*, defaults to 0.0):
Smoothness of the audio interpolation. 1.0 means linear interpolation.
This function will create sub directories for each prompt and seed pair. This function will create sub directories for each prompt and seed pair.
For example, if you provide the following prompts and seeds: For example, if you provide the following prompts and seeds:
``` ```
prompts = ['a', 'b', 'c'] prompts = ['a dog', 'a cat', 'a bird']
seeds = [1, 2, 3] seeds = [1, 2, 3]
num_interpolation_steps = 5 num_interpolation_steps = 5
output_dir = 'output_dir' output_dir = 'output_dir'
@ -722,7 +757,7 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
audio_offset = audio_start_sec + sum(num_interpolation_steps[:i]) / fps audio_offset = audio_start_sec + sum(num_interpolation_steps[:i]) / fps
audio_duration = num_step / fps audio_duration = num_step / fps
self.generate_interpolation_clip( self.make_clip_frames(
prompt_a, prompt_a,
prompt_b, prompt_b,
seed_a, seed_a,
@ -742,7 +777,8 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
offset=audio_offset, offset=audio_offset,
duration=audio_duration, duration=audio_duration,
fps=fps, fps=fps,
margin=(1.0, 5.0), margin=margin,
smooth=smooth,
) )
if audio_filepath if audio_filepath
else None, else None,
@ -783,6 +819,23 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
embed = self.text_encoder(text_input.input_ids.to(self.device))[0] embed = self.text_encoder(text_input.input_ids.to(self.device))[0]
return embed return embed
def init_noise(self, seed, noise_shape):
"""Helper to initialize noise"""
# randn does not exist on mps, so we create noise on CPU here and move it to the device after initialization
if self.device.type == "mps":
noise = torch.randn(
noise_shape,
device='cpu',
generator=torch.Generator(device='cpu').manual_seed(seed),
).to(self.device)
else:
noise = torch.randn(
noise_shape,
device=self.device,
generator=torch.Generator(device=self.device).manual_seed(seed),
)
return noise
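For reference, `generate_inputs` above now draws both endpoint latents from this helper. The self-contained sketch below illustrates the reproducibility property it relies on, using the latent shape of a 512x512 Stable Diffusion image; it mirrors the same seeding pattern rather than calling into the pipeline:

```python
import torch

# init_noise seeds a torch.Generator so the same (seed, shape) pair always yields the
# same starting latents; on Apple's MPS backend the noise is drawn on CPU first (the
# comment above notes randn does not exist on mps) and then moved to the device.
noise_shape = (1, 4, 64, 64)  # batch, latent channels, height // 8, width // 8
latents_a = torch.randn(noise_shape, generator=torch.Generator("cpu").manual_seed(42), device="cpu")
latents_b = torch.randn(noise_shape, generator=torch.Generator("cpu").manual_seed(42), device="cpu")
assert torch.equal(latents_a, latents_b)  # identical seeds -> identical endpoint noise
```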
@classmethod @classmethod
def from_pretrained(cls, *args, tiled=False, **kwargs): def from_pretrained(cls, *args, tiled=False, **kwargs):
"""Same as diffusers `from_pretrained` but with tiled option, which makes images tilable""" """Same as diffusers `from_pretrained` but with tiled option, which makes images tilable"""
@ -799,15 +852,6 @@ class StableDiffusionWalkPipeline(DiffusionPipeline):
patch_conv(padding_mode="circular") patch_conv(padding_mode="circular")
return super().from_pretrained(*args, **kwargs) pipeline = super().from_pretrained(*args, **kwargs)
pipeline.tiled = tiled
return pipeline
class NoCheck(ModelMixin):
"""Can be used in place of safety checker. Use responsibly and at your own risk."""
def __init__(self):
super().__init__()
self.register_parameter(name="asdf", param=torch.nn.Parameter(torch.randn(3)))
def forward(self, images=None, **kwargs):
return images, [False]


@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or
@ -28,7 +28,7 @@ from transformers import CLIPTextModel, CLIPTokenizer
import argparse import argparse
import itertools import itertools
import math import math
import os import os, sys
import random import random
#import datetime #import datetime
#from pathlib import Path #from pathlib import Path
@ -937,4 +937,3 @@ def layout():
# Start TensorBoard # Start TensorBoard
st_tensorboard(logdir=os.path.join("outputs", "textual_inversion"), port=8888) st_tensorboard(logdir=os.path.join("outputs", "textual_inversion"), port=8888)


@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or
@ -25,16 +25,19 @@ from streamlit.elements.image import image_to_url
#other imports #other imports
import uuid import uuid
from typing import Union
from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.plms import PLMSSampler from ldm.models.diffusion.plms import PLMSSampler
# streamlit components
from custom_components import sygil_suggestions
# Temp imports # Temp imports
# end of imports # end of imports
#--------------------------------------------------------------------------------------------------------------- #---------------------------------------------------------------------------------------------------------------
sygil_suggestions.init()
try: try:
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start. # this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
@ -103,7 +106,7 @@ def stable_horde(outpath, prompt, seed, sampler_name, save_grid, batch_size,
log.append("Generating image with Stable Horde.") log.append("Generating image with Stable Horde.")
st.session_state["progress_bar_text"].code('\n'.join(log), language='') st.session_state["progress_bar_text"].code('\n'.join(str(log)), language='')
# start time after garbage collection (or before?) # start time after garbage collection (or before?)
start_time = time.time() start_time = time.time()
@ -144,7 +147,7 @@ def stable_horde(outpath, prompt, seed, sampler_name, save_grid, batch_size,
headers = {"apikey": api_key} headers = {"apikey": api_key}
logger.debug(final_submit_dict) logger.debug(final_submit_dict)
st.session_state["progress_bar_text"].code('\n'.join(log), language='') st.session_state["progress_bar_text"].code('\n'.join(str(log)), language='')
horde_url = "https://stablehorde.net" horde_url = "https://stablehorde.net"
@ -154,7 +157,7 @@ def stable_horde(outpath, prompt, seed, sampler_name, save_grid, batch_size,
logger.debug(submit_results) logger.debug(submit_results)
log.append(submit_results) log.append(submit_results)
st.session_state["progress_bar_text"].code('\n'.join(log), language='') st.session_state["progress_bar_text"].code('\n'.join(str(log)), language='')
req_id = submit_results['id'] req_id = submit_results['id']
is_done = False is_done = False
@ -227,7 +230,7 @@ def stable_horde(outpath, prompt, seed, sampler_name, save_grid, batch_size,
save_grid=save_grid, save_grid=save_grid,
sort_samples=sampler_name, sampler_name=sampler_name, ddim_eta=ddim_eta, n_iter=n_iter, sort_samples=sampler_name, sampler_name=sampler_name, ddim_eta=ddim_eta, n_iter=n_iter,
batch_size=batch_size, i=iter, save_individual_images=save_individual_images, batch_size=batch_size, i=iter, save_individual_images=save_individual_images,
model_name="Stable Diffusion v1.4") model_name="Stable Diffusion v1.5")
output_images.append(img) output_images.append(img)
@ -402,10 +405,12 @@ def layout():
with input_col1: with input_col1:
#prompt = st.text_area("Input Text","") #prompt = st.text_area("Input Text","")
prompt = st.text_area("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.") placeholder = "A corgi wearing a top hat as an oil painting."
prompt = st.text_area("Input Text","", placeholder=placeholder, height=54)
sygil_suggestions.suggestion_area(placeholder)
# creating the page layout using columns # creating the page layout using columns
col1, col2, col3 = st.columns([1,2,1], gap="large") col1, col2, col3 = st.columns([2,5,2], gap="large")
with col1: with col1:
width = st.slider("Width:", min_value=st.session_state['defaults'].txt2img.width.min_value, max_value=st.session_state['defaults'].txt2img.width.max_value, width = st.slider("Width:", min_value=st.session_state['defaults'].txt2img.width.min_value, max_value=st.session_state['defaults'].txt2img.width.max_value,
@ -442,7 +447,7 @@ def layout():
st.session_state["update_preview"] = st.session_state["defaults"].general.update_preview st.session_state["update_preview"] = st.session_state["defaults"].general.update_preview
st.session_state["update_preview_frequency"] = st.number_input("Update Image Preview Frequency", st.session_state["update_preview_frequency"] = st.number_input("Update Image Preview Frequency",
min_value=1, min_value=0,
value=st.session_state['defaults'].txt2img.update_preview_frequency, value=st.session_state['defaults'].txt2img.update_preview_frequency,
help="Frequency in steps at which the the preview image is updated. By default the frequency \ help="Frequency in steps at which the the preview image is updated. By default the frequency \
is set to 10 step.") is set to 10 step.")
@ -483,7 +488,7 @@ def layout():
help="Select the model you want to use. This option is only available if you have custom models \ help="Select the model you want to use. This option is only available if you have custom models \
on your 'models/custom' folder. The model name that will be shown here is the same as the name\ on your 'models/custom' folder. The model name that will be shown here is the same as the name\
the file for the model has on said folder, it is recommended to give the .ckpt file a name that \ the file for the model has on said folder, it is recommended to give the .ckpt file a name that \
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.4") will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.5")
st.session_state.sampling_steps = st.number_input("Sampling Steps", value=st.session_state.defaults.txt2img.sampling_steps.value, st.session_state.sampling_steps = st.number_input("Sampling Steps", value=st.session_state.defaults.txt2img.sampling_steps.value,
min_value=st.session_state.defaults.txt2img.sampling_steps.min_value, min_value=st.session_state.defaults.txt2img.sampling_steps.min_value,



@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or


@ -1,6 +1,6 @@
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). # This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
# Copyright 2022 sd-webui team. # Copyright 2022 Sygil-Dev team.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or
@ -20,13 +20,14 @@
# We import hydralit like this to replace the previous stuff # We import hydralit like this to replace the previous stuff
# we had with native streamlit as it lets us replace things 1:1 # we had with native streamlit as it lets us replace things 1:1
#import hydralit as st #import hydralit as st
import collections.abc
from sd_utils import * from sd_utils import *
# streamlit imports # streamlit imports
import streamlit_nested_layout import streamlit_nested_layout
#streamlit components section #streamlit components section
from st_on_hover_tabs import on_hover_tabs #from st_on_hover_tabs import on_hover_tabs
from streamlit_server_state import server_state, server_state_lock from streamlit_server_state import server_state, server_state_lock
#other imports #other imports
@ -37,6 +38,8 @@ import k_diffusion as K
from omegaconf import OmegaConf from omegaconf import OmegaConf
import argparse import argparse
# import custom components
from custom_components import draggable_number_input
# end of imports # end of imports
#--------------------------------------------------------------------------------------------------------------- #---------------------------------------------------------------------------------------------------------------
@ -107,7 +110,7 @@ def load_css(isLocal, nameOrURL):
def layout(): def layout():
"""Layout functions to define all the streamlit layout here.""" """Layout functions to define all the streamlit layout here."""
if not st.session_state["defaults"].debug.enable_hydralit: if not st.session_state["defaults"].debug.enable_hydralit:
st.set_page_config(page_title="Stable Diffusion Playground", layout="wide") st.set_page_config(page_title="Stable Diffusion Playground", layout="wide", initial_sidebar_state="collapsed")
#app = st.HydraApp(title='Stable Diffusion WebUI', favicon="", sidebar_state="expanded", layout="wide", #app = st.HydraApp(title='Stable Diffusion WebUI', favicon="", sidebar_state="expanded", layout="wide",
#hide_streamlit_markers=False, allow_url_nav=True , clear_cross_app_sessions=False) #hide_streamlit_markers=False, allow_url_nav=True , clear_cross_app_sessions=False)
@ -116,6 +119,39 @@ def layout():
# load css as an external file, function has an option to local or remote url. Potential use when running from cloud infra that might not have access to local path. # load css as an external file, function has an option to local or remote url. Potential use when running from cloud infra that might not have access to local path.
load_css(True, 'frontend/css/streamlit.main.css') load_css(True, 'frontend/css/streamlit.main.css')
#
# specify the primary menu definition
menu_data = [
{'id': 'Stable Diffusion', 'label': 'Stable Diffusion', 'icon': 'bi bi-grid-1x2-fill'},
{'id': 'Textual Inversion', 'label': 'Textual Inversion', 'icon': 'bi bi-lightbulb-fill'},
{'id': 'Model Manager', 'label': 'Model Manager', 'icon': 'bi bi-cloud-arrow-down-fill'},
{'id': 'Tools','label':"Tools", 'icon': "bi bi-tools", 'submenu':[
{'id': 'API Server', 'label': 'API Server', 'icon': 'bi bi-server'},
#{'id': 'Barfi/BaklavaJS', 'label': 'Barfi/BaklavaJS', 'icon': 'bi bi-diagram-3-fill'},
#{'id': 'API Server', 'label': 'API Server', 'icon': 'bi bi-server'},
]},
{'id': 'Settings', 'label': 'Settings', 'icon': 'bi bi-gear-fill'},
#{'icon': "fa-solid fa-radar",'label':"Dropdown1", 'submenu':[
# {'id':' subid11','icon': "fa fa-paperclip", 'label':"Sub-item 1"},{'id':'subid12','icon': "💀", 'label':"Sub-item 2"},{'id':'subid13','icon': "fa fa-database", 'label':"Sub-item 3"}]},
#{'icon': "far fa-chart-bar", 'label':"Chart"},#no tooltip message
#{'id':' Crazy return value 💀','icon': "💀", 'label':"Calendar"},
#{'icon': "fas fa-tachometer-alt", 'label':"Dashboard",'ttip':"I'm the Dashboard tooltip!"}, #can add a tooltip message
#{'icon': "far fa-copy", 'label':"Right End"},
#{'icon': "fa-solid fa-radar",'label':"Dropdown2", 'submenu':[{'label':"Sub-item 1", 'icon': "fa fa-meh"},{'label':"Sub-item 2"},{'icon':'🙉','label':"Sub-item 3",}]},
]
over_theme = {'txc_inactive': '#FFFFFF', "menu_background":'#000000'}
menu_id = hc.nav_bar(
menu_definition=menu_data,
#home_name='Home',
#login_name='Logout',
hide_streamlit_markers=False,
override_theme=over_theme,
sticky_nav=True,
sticky_mode='pinned',
)
# check if the models exist on their respective folders # check if the models exist on their respective folders
with server_state_lock["GFPGAN_available"]: with server_state_lock["GFPGAN_available"]:
if os.path.exists(os.path.join(st.session_state["defaults"].general.GFPGAN_dir, f"{st.session_state['defaults'].general.GFPGAN_model}.pth")): if os.path.exists(os.path.join(st.session_state["defaults"].general.GFPGAN_dir, f"{st.session_state['defaults'].general.GFPGAN_model}.pth")):
@ -129,19 +165,23 @@ def layout():
else: else:
server_state["RealESRGAN_available"] = False server_state["RealESRGAN_available"] = False
with st.sidebar: #with st.sidebar:
tabs = on_hover_tabs(tabName=['Stable Diffusion', "Textual Inversion","Model Manager","Settings"], #page = on_hover_tabs(tabName=['Stable Diffusion', "Textual Inversion","Model Manager","Settings"],
iconName=['dashboard','model_training' ,'cloud_download', 'settings'], default_choice=0) #iconName=['dashboard','model_training' ,'cloud_download', 'settings'], default_choice=0)
# need to see how to get the icons to show for the hydralit option_bar # need to see how to get the icons to show for the hydralit option_bar
#tabs = hc.option_bar([{'icon':'grid-outline','label':'Stable Diffusion'}, {'label':"Textual Inversion"}, #page = hc.option_bar([{'icon':'grid-outline','label':'Stable Diffusion'}, {'label':"Textual Inversion"},
#{'label':"Model Manager"},{'label':"Settings"}], #{'label':"Model Manager"},{'label':"Settings"}],
#horizontal_orientation=False, #horizontal_orientation=False,
#override_theme={'txc_inactive': 'white','menu_background':'#111', 'stVerticalBlock': '#111','txc_active':'yellow','option_active':'blue'}) #override_theme={'txc_inactive': 'white','menu_background':'#111', 'stVerticalBlock': '#111','txc_active':'yellow','option_active':'blue'})
if tabs =='Stable Diffusion': #
#if menu_id == "Home":
#st.info("Under Construction. :construction_worker:")
if menu_id == "Stable Diffusion":
# set the page url and title # set the page url and title
st.experimental_set_query_params(page='stable-diffusion') #st.experimental_set_query_params(page='stable-diffusion')
try: try:
set_page_title("Stable Diffusion Playground") set_page_title("Stable Diffusion Playground")
except NameError: except NameError:
@ -179,22 +219,35 @@ def layout():
layout() layout()
# #
elif tabs == 'Model Manager': elif menu_id == 'Model Manager':
set_page_title("Model Manager - Stable Diffusion Playground") set_page_title("Model Manager - Stable Diffusion Playground")
from ModelManager import layout from ModelManager import layout
layout() layout()
elif tabs == 'Textual Inversion': elif menu_id == 'Textual Inversion':
from textual_inversion import layout from textual_inversion import layout
layout() layout()
elif tabs == 'Settings': elif menu_id == 'API Server':
set_page_title("API Server - Stable Diffusion Playground")
from APIServer import layout
layout()
#elif menu_id == 'Barfi/BaklavaJS':
#set_page_title("Barfi/BaklavaJS - Stable Diffusion Playground")
#from barfi_baklavajs import layout
#layout()
elif menu_id == 'Settings':
set_page_title("Settings - Stable Diffusion Playground") set_page_title("Settings - Stable Diffusion Playground")
from Settings import layout from Settings import layout
layout() layout()
# calling dragable input component module at the end, so it works on all pages
draggable_number_input.load()
if __name__ == '__main__': if __name__ == '__main__':
set_logger_verbosity(opt.verbosity) set_logger_verbosity(opt.verbosity)


@ -1,7 +1,7 @@
from setuptools import setup, find_packages from setuptools import setup, find_packages
setup( setup(
name='sd-webui', name='sygil-webui',
version='0.0.1', version='0.0.1',
description='', description='',
packages=find_packages(), packages=find_packages(),

streamlit_webview.py (new file)

@ -0,0 +1,15 @@
import os, webview
from streamlit.web import bootstrap
from streamlit import config as _config

# Create a native desktop window pointing at the local Streamlit server (default port 8501).
webview.create_window('Sygil', 'http://localhost:8501', width=1000, height=800, min_size=(500, 500))
# webview.start() runs the GUI event loop for the window created above.
webview.start()

# Run the Streamlit app headlessly (no browser tab) from this file's directory.
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'scripts/webui_streamlit.py')
_config.set_option("server.headless", True)
args = []
#streamlit.cli.main_run(filename, args)
bootstrap.run(filename,'',args, flag_options={})


@ -1,17 +1,17 @@
@echo off @echo off
:: This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/). :: This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
::
:: Copyright 2022 sd-webui team. :: Copyright 2022 Sygil-Dev team.
:: This program is free software: you can redistribute it and/or modify :: This program is free software: you can redistribute it and/or modify
:: it under the terms of the GNU Affero General Public License as published by :: it under the terms of the GNU Affero General Public License as published by
:: the Free Software Foundation, either version 3 of the License, or :: the Free Software Foundation, either version 3 of the License, or
:: (at your option) any later version. :: (at your option) any later version.
::
:: This program is distributed in the hope that it will be useful, :: This program is distributed in the hope that it will be useful,
:: but WITHOUT ANY WARRANTY; without even the implied warranty of :: but WITHOUT ANY WARRANTY; without even the implied warranty of
:: MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the :: MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
:: GNU Affero General Public License for more details. :: GNU Affero General Public License for more details.
::
:: You should have received a copy of the GNU Affero General Public License :: You should have received a copy of the GNU Affero General Public License
:: along with this program. If not, see <http://www.gnu.org/licenses/>. :: along with this program. If not, see <http://www.gnu.org/licenses/>.
:: Run all commands using this script's directory as the working directory :: Run all commands using this script's directory as the working directory
@ -98,12 +98,11 @@ call "%v_conda_path%\Scripts\activate.bat" "%v_conda_env_name%"
 :PROMPT
 set SETUPTOOLS_USE_DISTUTILS=stdlib
-IF EXIST "models\ldm\stable-diffusion-v1\model.ckpt" (
-set "PYTHONPATH=%~dp0"
-python scripts\relauncher.py %*
+IF EXIST "models\ldm\stable-diffusion-v1\Stable Diffusion v1.5.ckpt" (
+python -m streamlit run scripts\webui_streamlit.py --theme.base dark --server.address localhost
 ) ELSE (
-echo Your model file does not exist! Place it in 'models\ldm\stable-diffusion-v1' with the name 'model.ckpt'.
-pause
+echo Your model file does not exist! Once the WebUI launches please visit the Model Manager page and download the models by using the Download button for each model.
+python -m streamlit run scripts\webui_streamlit.py --theme.base dark --server.address localhost
 )
 ::cmd /k
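The launcher now starts Streamlit even when no checkpoint is present and defers model download to the Model Manager page. The same check-then-launch logic, sketched in Python for readers less familiar with batch syntax (paths and flags as in the script above; this is not a file from the commit):

```python
import subprocess
from pathlib import Path

MODEL_DIR = Path("models") / "ldm" / "stable-diffusion-v1"

# Warn if no checkpoint is present, but launch anyway so the Model Manager
# page can download the weights from inside the UI.
if not any(MODEL_DIR.glob("*.ckpt")):
    print("No checkpoint found; use the Model Manager page to download one.")

subprocess.run(
    ["python", "-m", "streamlit", "run", "scripts/webui_streamlit.py",
     "--theme.base", "dark", "--server.address", "localhost"],
    check=True,
)
```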

View File

@ -1,7 +1,8 @@
 #!/bin/bash -i
-# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
-# Copyright 2022 sd-webui team.
+
+# This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
+# Copyright 2022 Sygil-Dev team.
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as published by
 # the Free Software Foundation, either version 3 of the License, or
@ -30,7 +31,7 @@ LSDR_CONFIG="https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1"
 LSDR_MODEL="https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1"
 REALESRGAN_MODEL="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth"
 REALESRGAN_ANIME_MODEL="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth"
-SD_CONCEPT_REPO="https://github.com/sd-webui/sd-concepts-library/archive/refs/heads/main.zip"
+SD_CONCEPT_REPO="https://github.com/Sygil-Dev/sd-concepts-library/archive/refs/heads/main.zip"
 if [[ -f $ENV_MODIFED_FILE ]]; then
@ -85,22 +86,6 @@ conda_env_activation () {
     conda info | grep active
 }
-# Check to see if the SD model already exists, if not then it creates it and prompts the user to add the SD AI models to the repo directory
-sd_model_loading () {
-    if [ -f "$DIRECTORY/models/ldm/stable-diffusion-v1/model.ckpt" ]; then
-        printf "AI Model already in place. Continuing...\n\n"
-    else
-        printf "\n\n########## MOVE MODEL FILE ##########\n\n"
-        printf "Please download the 1.4 AI Model from Huggingface (or another source) and place it inside of the stable-diffusion-webui folder\n\n"
-        read -p "Once you have sd-v1-4.ckpt in the project root, Press Enter...\n\n"
-        # Check to make sure checksum of models is the original one from HuggingFace and not a fake model set
-        printf "fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556 sd-v1-4.ckpt" | sha256sum --check || exit 1
-        mv sd-v1-4.ckpt $DIRECTORY/models/ldm/stable-diffusion-v1/model.ckpt
-        rm -r ./Models
-    fi
-}
 # Checks to see if the upscaling models exist in their correct locations. If they do not they will be downloaded as required
 post_processor_model_loading () {
     # Check to see if GFPGAN has been added yet, if not it will download it and place it in the proper directory
@ -168,7 +153,7 @@ launch_webui () {
printf "Which Version of the WebUI Interface do you wish to use?\n" printf "Which Version of the WebUI Interface do you wish to use?\n"
select yn in "Streamlit" "Gradio"; do select yn in "Streamlit" "Gradio"; do
case $yn in case $yn in
Streamlit ) printf "\nStarting Stable Diffusion WebUI: Streamlit Interface. Please Wait...\n"; python -m streamlit run scripts/webui_streamlit.py; break;; Streamlit ) printf "\nStarting Stable Diffusion WebUI: Streamlit Interface. Please Wait...\n"; python -m streamlit run scripts/webui_streamlit.py --theme.base dark --server.address localhost; break;;
Gradio ) printf "\nStarting Stable Diffusion WebUI: Gradio Interface. Please Wait...\n"; python scripts/relauncher.py "$@"; break;; Gradio ) printf "\nStarting Stable Diffusion WebUI: Gradio Interface. Please Wait...\n"; python scripts/relauncher.py "$@"; break;;
esac esac
done done
@ -180,9 +165,9 @@ start_initialization () {
     sd_model_loading
     post_processor_model_loading
     conda_env_activation
-    if [ ! -e "models/ldm/stable-diffusion-v1/model.ckpt" ]; then
-        echo "Your model file does not exist! Place it in 'models/ldm/stable-diffusion-v1' with the name 'model.ckpt'."
-        exit 1
+    if [ ! -e "models/ldm/stable-diffusion-v1/*.ckpt" ]; then
+        echo "Your model file does not exist! Streamlit will handle this automatically, however Gradio still requires this file be placed manually. If you intend to use the Gradio interface, place it in 'models/ldm/stable-diffusion-v1' with the name 'model.ckpt'."
+        read -p "Once you have sd-v1-4.ckpt in the project root, if you are going to use Gradio, Press Enter...\n\n"
     fi
     launch_webui "$@"
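The deleted `sd_model_loading` helper verified the downloaded checkpoint against a known SHA-256 before moving it into place. For reference, the same integrity check sketched in Python (the digest is the one hard-coded in the removed function; the script itself is not part of the commit):

```python
import hashlib
from pathlib import Path

# SHA-256 of sd-v1-4.ckpt, as hard-coded in the removed sd_model_loading helper.
EXPECTED_SHA256 = "fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556"

def verify_checkpoint(path: str) -> bool:
    """Return True if the file's SHA-256 matches the expected digest."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_SHA256

if __name__ == "__main__":
    ok = verify_checkpoint("sd-v1-4.ckpt")
    print("checksum OK" if ok else "checksum MISMATCH")
```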

View File

@ -1,17 +1,17 @@
 @echo off
-:: This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
+:: This file is part of sygil-webui (https://github.com/Sygil-Dev/sygil-webui/).
 ::
-:: Copyright 2022 sd-webui team.
+:: Copyright 2022 Sygil-Dev team.
 :: This program is free software: you can redistribute it and/or modify
 :: it under the terms of the GNU Affero General Public License as published by
 :: the Free Software Foundation, either version 3 of the License, or
 :: (at your option) any later version.
 ::
 :: This program is distributed in the hope that it will be useful,
 :: but WITHOUT ANY WARRANTY; without even the implied warranty of
 :: MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 :: GNU Affero General Public License for more details.
 ::
 :: You should have received a copy of the GNU Affero General Public License
 :: along with this program. If not, see <http://www.gnu.org/licenses/>.
 :: Run all commands using this script's directory as the working directory
@ -99,7 +99,8 @@ call "%v_conda_path%\Scripts\activate.bat" "%v_conda_env_name%"
 :PROMPT
 set SETUPTOOLS_USE_DISTUTILS=stdlib
 IF EXIST "models\ldm\stable-diffusion-v1\model.ckpt" (
-python -m streamlit run scripts\webui_streamlit.py --theme.base dark
+set "PYTHONPATH=%~dp0"
+python scripts\relauncher.py %*
 ) ELSE (
 echo Your model file does not exist! Place it in 'models\ldm\stable-diffusion-v1' with the name 'model.ckpt'.
 pause