Mirror of https://github.com/sd-webui/stable-diffusion-webui.git (synced 2024-12-14 06:35:14 +03:00)

Merge branch 'dev' into master
commit bff1bd0021

11
.idea/.gitignore
vendored
@ -1,11 +0,0 @@
|
||||
# Default ignored files
|
||||
/shelf/
|
||||
/workspace.xml
|
||||
# Editor-based HTTP Client requests
|
||||
/httpRequests/
|
||||
# Datasource local storage ignored files
|
||||
/dataSources/
|
||||
/dataSources.local.xml
|
||||
|
||||
*.pyc
|
||||
.idea
|
93
README.md
@ -1,8 +1,8 @@
|
||||
# <center>Web-based UI for Stable Diffusion</center>
|
||||
|
||||
## Created by [sd-webui](https://github.com/sd-webui)
|
||||
## Created by [Sygil.Dev](https://github.com/sygil-dev)
|
||||
|
||||
## [Visit sd-webui's Discord Server](https://discord.gg/gyXNe4NySY) [![Discord Server](https://user-images.githubusercontent.com/5977640/190528254-9b5b4423-47ee-4f24-b4f9-fd13fba37518.png)](https://discord.gg/gyXNe4NySY)
|
||||
## [Join us at Sygil.Dev's Discord Server](https://discord.gg/gyXNe4NySY) [![Discord Server](https://user-images.githubusercontent.com/5977640/190528254-9b5b4423-47ee-4f24-b4f9-fd13fba37518.png)](https://discord.gg/gyXNe4NySY)
|
||||
|
||||
## Installation instructions for:
|
||||
|
||||
@ -11,7 +11,7 @@
|
||||
|
||||
### Want to ask a question or request a feature?
|
||||
|
||||
Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions](https://github.com/sd-webui/stable-diffusion-webui/discussions).
|
||||
Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions](https://github.com/sygil-dev/stable-diffusion-webui/discussions).
|
||||
|
||||
## Documentation
|
||||
|
||||
@ -21,7 +21,7 @@ Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions]
|
||||
|
||||
Check the [Contribution Guide](CONTRIBUTING.md)
|
||||
|
||||
[sd-webui](https://github.com/sd-webui) main devs:
|
||||
[sygil-dev](https://github.com/sygil-dev) main devs:
|
||||
|
||||
* ![hlky's avatar](https://avatars.githubusercontent.com/u/106811348?s=40&v=4) [hlky](https://github.com/hlky)
|
||||
* ![ZeroCool940711's avatar](https://avatars.githubusercontent.com/u/5977640?s=40&v=4)[ZeroCool940711](https://github.com/ZeroCool940711)
|
||||
@ -29,23 +29,15 @@ Check the [Contribution Guide](CONTRIBUTING.md)
|
||||
|
||||
### Project Features:
|
||||
|
||||
* Two great Web UI's to choose from: Streamlit or Gradio
|
||||
|
||||
* No more manually typing parameters, now all you have to do is write your prompt and adjust sliders
|
||||
|
||||
* Built-in image enhancers and upscalers, including GFPGAN and realESRGAN
|
||||
|
||||
* Generator Preview: See your image as it's being made
|
||||
* Run additional upscaling models on CPU to save VRAM
|
||||
|
||||
* Textual inversion 🔥: [info](https://textual-inversion.github.io/) - requires enabling, see [here](https://github.com/hlky/sd-enable-textual-inversion), script works as usual without it enabled
|
||||
* Textual inversion: [Research Paper](https://textual-inversion.github.io/)
|
||||
|
||||
* Advanced img2img editor with Mask and crop capabilities
|
||||
|
||||
* Mask painting 🖌️: Powerful tool for re-generating only specific parts of an image you want to change (currently Gradio only)
|
||||
|
||||
* More diffusion samplers 🔥🔥: A great collection of samplers to use, including:
|
||||
* K-Diffusion Samplers: A great collection of samplers to use, including:
|
||||
|
||||
- `k_euler` (Default)
|
||||
- `k_euler`
|
||||
- `k_lms`
|
||||
- `k_euler_a`
|
||||
- `k_dpm_2`
|
||||
@ -54,35 +46,31 @@ Check the [Contribution Guide](CONTRIBUTING.md)
|
||||
- `PLMS`
|
||||
- `DDIM`
|
||||
|
||||
* Loopback ➿: Automatically feed the last generated sample back into img2img
|
||||
* Loopback: Automatically feed the last generated sample back into img2img
|
||||
|
||||
* Prompt Weighting 🏋️: Adjust the strength of different terms in your prompt
|
||||
* Prompt Weighting & Negative Prompts: Gain more control over your creations
|
||||
|
||||
* Selectable GPU usage with `--gpu <id>`
|
||||
* Selectable GPU usage from Settings tab
|
||||
|
||||
* Memory Monitoring 🔥: Shows VRAM usage and generation time after outputting
|
||||
* Word Seeds: Use words instead of seed numbers
|
||||
|
||||
* Word Seeds 🔥: Use words instead of seed numbers
|
||||
* Automated Launcher: Activate conda and run Stable Diffusion with a single command
|
||||
|
||||
* CFG: Classifier free guidance scale, a feature for fine-tuning your output
|
||||
|
||||
* Automatic Launcher: Activate conda and run Stable Diffusion with a single command
|
||||
|
||||
* Lighter on VRAM: 512x512 Text2Image & Image2Image tested working on 4GB
|
||||
* Lighter on VRAM: 512x512 Text2Image & Image2Image tested working on 4GB (with *optimized* mode enabled in Settings)
|
||||
|
||||
* Prompt validation: If your prompt is too long, you will get a warning in the text output field
|
||||
|
||||
* Copy-paste generation parameters: A text output provides generation parameters in an easy to copy-paste form for easy sharing.
|
||||
|
||||
* Correct seeds for batches: If you use a seed of 1000 to generate two batches of two images each, four generated images will have seeds: `1000, 1001, 1002, 1003`.
|
||||
* Sequential seeds for batches: If you use a seed of 1000 to generate two batches of two images each, the four generated images will have seeds: `1000, 1001, 1002, 1003` (see the sketch after this feature list).
|
||||
|
||||
* Prompt matrix: Separate multiple prompts using the `|` character, and the system will produce an image for every combination of them.
|
||||
|
||||
* Loopback for Image2Image: A checkbox for img2img that automatically feeds the output image back in as the input for the next batch. Equivalent to saving the output image and replacing the input image with it.
|
||||
* [Gradio] Advanced img2img editor with Mask and crop capabilities
|
||||
|
||||
# Stable Diffusion Web UI
|
||||
* [Gradio] Mask painting 🖌️: Powerful tool for re-generating only specific parts of an image you want to change (currently Gradio only)
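As a purely illustrative sketch of the sequential-seeds and prompt-matrix bullets above (the helper names here are assumptions, not the project's actual functions):

```python
from itertools import combinations

def batch_seeds(seed, batch_count, batch_size):
    # Sequential seeds: seed 1000 with two batches of two images -> 1000, 1001, 1002, 1003
    return [seed + i for i in range(batch_count * batch_size)]

def prompt_matrix(prompt):
    # Every combination of the '|'-separated optional parts, appended to the first (fixed) part
    base, *options = [part.strip() for part in prompt.split("|")]
    combos = []
    for r in range(len(options) + 1):
        for subset in combinations(options, r):
            combos.append(", ".join([base, *subset]))
    return combos

print(batch_seeds(1000, batch_count=2, batch_size=2))      # [1000, 1001, 1002, 1003]
print(prompt_matrix("a castle | at night | oil painting"))  # 4 prompt variants
```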
|
||||
|
||||
A fully-integrated and easy way to work with Stable Diffusion right from a browser window.
|
||||
# SD WebUI
|
||||
|
||||
An easy way to work with Stable Diffusion right from your browser.
|
||||
|
||||
## Streamlit
|
||||
|
||||
@ -90,30 +78,41 @@ A fully-integrated and easy way to work with Stable Diffusion right from a brows
|
||||
|
||||
**Features:**
|
||||
|
||||
- Clean UI with an easy to use design, with support for widescreen displays.
|
||||
- Dynamic live preview of your generations
|
||||
- Easily customizable presets right from the WebUI (Coming Soon!)
|
||||
- An integrated gallery to show the generations for a prompt or session (Coming soon!)
|
||||
- Better optimization VRAM usage optimization, less errors for bigger generations.
|
||||
- Text2Video - Generate video clips from text prompts right from the WEb UI (WIP)
|
||||
- Concepts Library - Run custom embeddings others have made via textual inversion.
|
||||
- Actively being developed with new features being added and planned - Stay Tuned!
|
||||
- Streamlit is now the new primary UI for the project moving forward.
|
||||
- *Currently in active development and still missing some of the features present in the Gradio Interface.*
|
||||
- Clean UI with an easy to use design, with support for widescreen displays
|
||||
- *Dynamic live preview* of your generations
|
||||
- Easily customizable defaults, right from the WebUI's Settings tab
|
||||
- An integrated gallery to show the generations for a prompt
|
||||
- *Optimized VRAM* usage for bigger generations or usage on lower end GPUs
|
||||
- *Text2Video:* Generate video clips from text prompts right from the WebUI (WIP)
|
||||
- *Concepts Library:* Run custom embeddings others have made via textual inversion.
|
||||
- **Currently in development:** [Stable Horde](https://stablehorde.net/) integration; ImgLab, batch inputs, & mask editor from Gradio
|
||||
|
||||
**Prompt Weights & Negative Prompts:**

To give a token (tag recognized by the AI) a specific or increased weight (emphasis), add `:0.##` to the prompt, where `0.##` is a decimal that will specify the weight of all tokens before the colon.
Ex: `cat:0.30, dog:0.70` or `guy riding a bicycle :0.7, incoming car :0.30`

Negative prompts can be added by using `###`, after which any tokens will be seen as negative.
Ex: `cat playing with string ### yarn` will negate `yarn` from the generated image.

Negatives are a very powerful tool to get rid of contextually similar or related topics, but **be careful when adding them since the AI might see connections you can't**, and end up outputting gibberish.

**Tip:** Try using the same seed with different prompt configurations or weight values to see how the AI understands them; it can lead to prompts that are better tuned and less prone to error.
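A minimal, hypothetical sketch of how a prompt using this weighting and `###` syntax could be split apart (the function name and parsing rules below are illustrative assumptions, not the project's actual implementation):

```python
import re

def split_weighted_prompt(prompt):
    # Split off the negative part introduced by '###'
    positive, _, negative = prompt.partition("###")

    def parse(text):
        # Each comma-separated chunk may end in ':<weight>'; the default weight is 1.0
        parts = []
        for chunk in filter(None, (c.strip() for c in text.split(","))):
            match = re.search(r":\s*([0-9]*\.?[0-9]+)\s*$", chunk)
            if match:
                parts.append((chunk[:match.start()].strip(), float(match.group(1))))
            else:
                parts.append((chunk, 1.0))
        return parts

    return parse(positive), parse(negative)

pos, neg = split_weighted_prompt("cat playing with string:0.7, studio lighting ### yarn")
print(pos)  # [('cat playing with string', 0.7), ('studio lighting', 1.0)]
print(neg)  # [('yarn', 1.0)]
```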
|
||||
|
||||
Please see the [Streamlit Documentation](docs/4.streamlit-interface.md) to learn more.
|
||||
|
||||
## Gradio
|
||||
## Gradio [Legacy]
|
||||
|
||||
![](images/gradio/gradio-t2i.png)
|
||||
|
||||
**Features:**
|
||||
|
||||
- Older UI design that is fully functional and feature complete.
|
||||
- Older UI that is functional and feature complete.
|
||||
- Has access to all upscaling models, including LSDR.
|
||||
- Dynamic prompt entry automatically changes your generation settings based on `--params` in a prompt.
|
||||
- Includes quick and easy ways to send generations to Image2Image or the Image Lab for upscaling.
|
||||
- *Note, the Gradio interface is no longer being actively developed and is only receiving bug fixes.*
|
||||
|
||||
**Note: the Gradio interface is no longer being actively developed by Sygil.Dev and is only receiving bug fixes.**
|
||||
|
||||
Please see the [Gradio Documentation](docs/5.gradio-interface.md) to learn more.
|
||||
|
||||
@ -153,11 +152,11 @@ More powerful upscalers that uses a seperate Latent Diffusion model to more clea
|
||||
|
||||
|
||||
|
||||
Please see the [Image Enhancers Documentation](docs/5.image_enhancers.md) to learn more.
|
||||
Please see the [Image Enhancers Documentation](docs/6.image_enhancers.md) to learn more.
|
||||
|
||||
-----
|
||||
|
||||
### *Original Information From The Stable Diffusion Repo*
|
||||
### *Original Information From The Stable Diffusion Repo:*
|
||||
|
||||
# Stable Diffusion
|
||||
|
||||
|
567
Web_based_UI_for_Stable_Diffusion_colab.ipynb
Normal file
@ -0,0 +1,567 @@
|
||||
{
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 0,
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"private_outputs": true,
|
||||
"provenance": [],
|
||||
"collapsed_sections": [
|
||||
"5-Bx4AsEoPU-",
|
||||
"xMWVQOg0G1Pj"
|
||||
]
|
||||
},
|
||||
"kernelspec": {
|
||||
"name": "python3",
|
||||
"display_name": "Python 3"
|
||||
},
|
||||
"language_info": {
|
||||
"name": "python"
|
||||
},
|
||||
"accelerator": "GPU"
|
||||
},
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sd-webui/stable-diffusion-webui/blob/dev/Web_based_UI_for_Stable_Diffusion_colab.ipynb)"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "S5RoIM-5IPZJ"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# README"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "5-Bx4AsEoPU-"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"###<center>Web-based UI for Stable Diffusion</center>\n",
|
||||
"\n",
|
||||
"## Created by [sd-webui](https://github.com/sd-webui)\n",
|
||||
"\n",
|
||||
"## [Visit sd-webui's Discord Server](https://discord.gg/gyXNe4NySY) [![Discord Server](https://user-images.githubusercontent.com/5977640/190528254-9b5b4423-47ee-4f24-b4f9-fd13fba37518.png)](https://discord.gg/gyXNe4NySY)\n",
|
||||
"\n",
|
||||
"## Installation instructions for:\n",
|
||||
"\n",
|
||||
"- **[Windows](https://sd-webui.github.io/stable-diffusion-webui/docs/1.windows-installation.html)** \n",
|
||||
"- **[Linux](https://sd-webui.github.io/stable-diffusion-webui/docs/2.linux-installation.html)**\n",
|
||||
"\n",
|
||||
"### Want to ask a question or request a feature?\n",
|
||||
"\n",
|
||||
"Come to our [Discord Server](https://discord.gg/gyXNe4NySY) or use [Discussions](https://github.com/sd-webui/stable-diffusion-webui/discussions).\n",
|
||||
"\n",
|
||||
"## Documentation\n",
|
||||
"\n",
|
||||
"[Documentation is located here](https://sd-webui.github.io/stable-diffusion-webui/)\n",
|
||||
"\n",
|
||||
"## Want to contribute?\n",
|
||||
"\n",
|
||||
"Check the [Contribution Guide](CONTRIBUTING.md)\n",
|
||||
"\n",
|
||||
"[sd-webui](https://github.com/sd-webui) main devs:\n",
|
||||
"\n",
|
||||
"* ![hlky's avatar](https://avatars.githubusercontent.com/u/106811348?s=40&v=4) [hlky](https://github.com/hlky)\n",
|
||||
"* ![ZeroCool940711's avatar](https://avatars.githubusercontent.com/u/5977640?s=40&v=4)[ZeroCool940711](https://github.com/ZeroCool940711)\n",
|
||||
"* ![codedealer's avatar](https://avatars.githubusercontent.com/u/4258136?s=40&v=4)[codedealer](https://github.com/codedealer)\n",
|
||||
"\n",
|
||||
"### Project Features:\n",
|
||||
"\n",
|
||||
"* Two great Web UI's to choose from: Streamlit or Gradio\n",
|
||||
"\n",
|
||||
"* No more manually typing parameters, now all you have to do is write your prompt and adjust sliders\n",
|
||||
"\n",
|
||||
"* Built-in image enhancers and upscalers, including GFPGAN and realESRGAN\n",
|
||||
"\n",
|
||||
"* Run additional upscaling models on CPU to save VRAM\n",
|
||||
"\n",
|
||||
"* Textual inversion 🔥: [info](https://textual-inversion.github.io/) - requires enabling, see [here](https://github.com/hlky/sd-enable-textual-inversion), script works as usual without it enabled\n",
|
||||
"\n",
|
||||
"* Advanced img2img editor with Mask and crop capabilities\n",
|
||||
"\n",
|
||||
"* Mask painting 🖌️: Powerful tool for re-generating only specific parts of an image you want to change (currently Gradio only)\n",
|
||||
"\n",
|
||||
"* More diffusion samplers 🔥🔥: A great collection of samplers to use, including:\n",
|
||||
" \n",
|
||||
" - `k_euler` (Default)\n",
|
||||
" - `k_lms`\n",
|
||||
" - `k_euler_a`\n",
|
||||
" - `k_dpm_2`\n",
|
||||
" - `k_dpm_2_a`\n",
|
||||
" - `k_heun`\n",
|
||||
" - `PLMS`\n",
|
||||
" - `DDIM`\n",
|
||||
"\n",
|
||||
"* Loopback ➿: Automatically feed the last generated sample back into img2img\n",
|
||||
"\n",
|
||||
"* Prompt Weighting 🏋️: Adjust the strength of different terms in your prompt\n",
|
||||
"\n",
|
||||
"* Selectable GPU usage with `--gpu <id>`\n",
|
||||
"\n",
|
||||
"* Memory Monitoring 🔥: Shows VRAM usage and generation time after outputting\n",
|
||||
"\n",
|
||||
"* Word Seeds 🔥: Use words instead of seed numbers\n",
|
||||
"\n",
|
||||
"* CFG: Classifier free guidance scale, a feature for fine-tuning your output\n",
|
||||
"\n",
|
||||
"* Automatic Launcher: Activate conda and run Stable Diffusion with a single command\n",
|
||||
"\n",
|
||||
"* Lighter on VRAM: 512x512 Text2Image & Image2Image tested working on 4GB\n",
|
||||
"\n",
|
||||
"* Prompt validation: If your prompt is too long, you will get a warning in the text output field\n",
|
||||
"\n",
|
||||
"* Copy-paste generation parameters: A text output provides generation parameters in an easy to copy-paste form for easy sharing.\n",
|
||||
"\n",
|
||||
"* Correct seeds for batches: If you use a seed of 1000 to generate two batches of two images each, four generated images will have seeds: `1000, 1001, 1002, 1003`.\n",
|
||||
"\n",
|
||||
"* Prompt matrix: Separate multiple prompts using the `|` character, and the system will produce an image for every combination of them.\n",
|
||||
"\n",
|
||||
"* Loopback for Image2Image: A checkbox for img2img allowing to automatically feed output image as input for the next batch. Equivalent to saving output image, and replacing input image with it.\n",
|
||||
"\n",
|
||||
"# Stable Diffusion Web UI\n",
|
||||
"\n",
|
||||
"A fully-integrated and easy way to work with Stable Diffusion right from a browser window.\n",
|
||||
"\n",
|
||||
"## Streamlit\n",
|
||||
"\n",
|
||||
"![](images/streamlit/streamlit-t2i.png)\n",
|
||||
"\n",
|
||||
"**Features:**\n",
|
||||
"\n",
|
||||
"- Clean UI with an easy to use design, with support for widescreen displays.\n",
|
||||
"- Dynamic live preview of your generations\n",
|
||||
"- Easily customizable presets right from the WebUI (Coming Soon!)\n",
|
||||
"- An integrated gallery to show the generations for a prompt or session (Coming soon!)\n",
|
||||
"- Better optimization VRAM usage optimization, less errors for bigger generations.\n",
|
||||
"- Text2Video - Generate video clips from text prompts right from the WEb UI (WIP)\n",
|
||||
"- Concepts Library - Run custom embeddings others have made via textual inversion.\n",
|
||||
"- Actively being developed with new features being added and planned - Stay Tuned!\n",
|
||||
"- Streamlit is now the new primary UI for the project moving forward.\n",
|
||||
"- *Currently in active development and still missing some of the features present in the Gradio Interface.*\n",
|
||||
"\n",
|
||||
"Please see the [Streamlit Documentation](docs/4.streamlit-interface.md) to learn more.\n",
|
||||
"\n",
|
||||
"## Gradio\n",
|
||||
"\n",
|
||||
"![](images/gradio/gradio-t2i.png)\n",
|
||||
"\n",
|
||||
"**Features:**\n",
|
||||
"\n",
|
||||
"- Older UI design that is fully functional and feature complete.\n",
|
||||
"- Has access to all upscaling models, including LSDR.\n",
|
||||
"- Dynamic prompt entry automatically changes your generation settings based on `--params` in a prompt.\n",
|
||||
"- Includes quick and easy ways to send generations to Image2Image or the Image Lab for upscaling.\n",
|
||||
"- *Note, the Gradio interface is no longer being actively developed and is only receiving bug fixes.*\n",
|
||||
"\n",
|
||||
"Please see the [Gradio Documentation](docs/5.gradio-interface.md) to learn more.\n",
|
||||
"\n",
|
||||
"## Image Upscalers\n",
|
||||
"\n",
|
||||
"---\n",
|
||||
"\n",
|
||||
"### GFPGAN\n",
|
||||
"\n",
|
||||
"![](images/GFPGAN.png)\n",
|
||||
"\n",
|
||||
"Lets you improve faces in pictures using the GFPGAN model. There is a checkbox in every tab to use GFPGAN at 100%, and also a separate tab that just allows you to use GFPGAN on any picture, with a slider that controls how strong the effect is.\n",
|
||||
"\n",
|
||||
"If you want to use GFPGAN to improve generated faces, you need to install it separately.\n",
|
||||
"Download [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it\n",
|
||||
"into the `/stable-diffusion-webui/models/gfpgan` directory. \n",
|
||||
"\n",
|
||||
"### RealESRGAN\n",
|
||||
"\n",
|
||||
"![](images/RealESRGAN.png)\n",
|
||||
"\n",
|
||||
"Lets you double the resolution of generated images. There is a checkbox in every tab to use RealESRGAN, and you can choose between the regular upscaler and the anime version.\n",
|
||||
"There is also a separate tab for using RealESRGAN on any picture.\n",
|
||||
"\n",
|
||||
"Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).\n",
|
||||
"Put them into the `stable-diffusion-webui/models/realesrgan` directory. \n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"### LSDR\n",
|
||||
"\n",
|
||||
"Download **LDSR** [project.yaml](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1) and [model last.cpkt](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1). Rename last.ckpt to model.ckpt and place both under `stable-diffusion-webui/models/ldsr/`\n",
|
||||
"\n",
|
||||
"### GoBig, and GoLatent *(Currently on the Gradio version Only)*\n",
|
||||
"\n",
|
||||
"More powerful upscalers that uses a seperate Latent Diffusion model to more cleanly upscale images.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Please see the [Image Enhancers Documentation](docs/6.image_enhancers.md) to learn more.\n",
|
||||
"\n",
|
||||
"-----\n",
|
||||
"\n",
|
||||
"### *Original Information From The Stable Diffusion Repo*\n",
|
||||
"\n",
|
||||
"# Stable Diffusion\n",
|
||||
"\n",
|
||||
"*Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:*\n",
|
||||
"\n",
|
||||
"[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://ommer-lab.com/research/latent-diffusion-models/)<br/>\n",
|
||||
"[Robin Rombach](https://github.com/rromb)\\*,\n",
|
||||
"[Andreas Blattmann](https://github.com/ablattmann)\\*,\n",
|
||||
"[Dominik Lorenz](https://github.com/qp-qp)\\,\n",
|
||||
"[Patrick Esser](https://github.com/pesser),\n",
|
||||
"[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)<br/>\n",
|
||||
"\n",
|
||||
"**CVPR '22 Oral**\n",
|
||||
"\n",
|
||||
"which is available on [GitHub](https://github.com/CompVis/latent-diffusion). PDF at [arXiv](https://arxiv.org/abs/2112.10752). Please also visit our [Project page](https://ommer-lab.com/research/latent-diffusion-models/).\n",
|
||||
"\n",
|
||||
"[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion\n",
|
||||
"model.\n",
|
||||
"Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. \n",
|
||||
"Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487), \n",
|
||||
"this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.\n",
|
||||
"With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.\n",
|
||||
"See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).\n",
|
||||
"\n",
|
||||
"## Stable Diffusion v1\n",
|
||||
"\n",
|
||||
"Stable Diffusion v1 refers to a specific configuration of the model\n",
|
||||
"architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet\n",
|
||||
"and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and \n",
|
||||
"then finetuned on 512x512 images.\n",
|
||||
"\n",
|
||||
"*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present\n",
|
||||
"in its training data. \n",
|
||||
"Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion).\n",
|
||||
"\n",
|
||||
"## Comments\n",
|
||||
"\n",
|
||||
"- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion)\n",
|
||||
" and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch). \n",
|
||||
" Thanks for open-sourcing!\n",
|
||||
"\n",
|
||||
"- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories). \n",
|
||||
"\n",
|
||||
"## BibTeX\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"@misc{rombach2021highresolution,\n",
|
||||
" title={High-Resolution Image Synthesis with Latent Diffusion Models}, \n",
|
||||
" author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},\n",
|
||||
" year={2021},\n",
|
||||
" eprint={2112.10752},\n",
|
||||
" archivePrefix={arXiv},\n",
|
||||
" primaryClass={cs.CV}\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"```"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "z4kQYMPQn4d-"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# Utils"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "IZjJSr-WPNxB"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"metadata": {
|
||||
"id": "yKFE49BHaWTb",
|
||||
"cellView": "form"
|
||||
},
|
||||
"source": [
|
||||
"#@title <-- Press play on the music player to keep the tab alive, then you can continue with everything below (Uses only 13MB of data)\n",
|
||||
"%%html\n",
|
||||
"<b>Press play on the music player to keep the tab alive, then start your generation below (Uses only 13MB of data)</b><br/>\n",
|
||||
"<audio src=\"https://henk.tech/colabkobold/silence.m4a\" controls>"
|
||||
],
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "-o8F1NCNTK2u"
|
||||
},
|
||||
"source": [
|
||||
"JS to prevent idle timeout:\n",
|
||||
"\n",
|
||||
"Press F12 OR CTRL + SHIFT + I OR right click on this website -> inspect.\n",
|
||||
"Then click on the console tab and paste in the following code.\n",
|
||||
"\n",
|
||||
"```javascript\n",
|
||||
"function ClickConnect(){\n",
|
||||
"console.log(\"Working\");\n",
|
||||
"document.querySelector(\"colab-toolbar-button#connect\").click()\n",
|
||||
"}\n",
|
||||
"setInterval(ClickConnect,60000)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"metadata": {
|
||||
"cellView": "form",
|
||||
"id": "eq0-E5mjSpmP"
|
||||
},
|
||||
"source": [
|
||||
"#@markdown #**Check GPU type**\n",
|
||||
"#@markdown ### Factory reset runtime if you don't have the desired GPU.\n",
|
||||
"\n",
|
||||
"#@markdown ---\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"#@markdown V100 = Excellent (*Available only for Colab Pro users*)\n",
|
||||
"\n",
|
||||
"#@markdown P100 = Very Good\n",
|
||||
"\n",
|
||||
"#@markdown T4 = Good (*preferred*)\n",
|
||||
"\n",
|
||||
"#@markdown K80 = Meh\n",
|
||||
"\n",
|
||||
"#@markdown P4 = (*Not Recommended*) \n",
|
||||
"\n",
|
||||
"#@markdown ---\n",
|
||||
"\n",
|
||||
"!nvidia-smi -L"
|
||||
],
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# Clone the repository and install dependencies."
|
||||
],
|
||||
"metadata": {
|
||||
"id": "WcZH9VE6JOCd"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"id": "NG3JxFE6IreU"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!git clone https://github.com/sd-webui/stable-diffusion-webui.git"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"%cd /content/stable-diffusion-webui/"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "pZHGf03Vp305"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!git checkout dev\n",
|
||||
"!git pull"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "__8TYN2_jfga"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!pip install condacolab\n",
|
||||
"import condacolab\n",
|
||||
"condacolab.install_from_url(\"https://github.com/conda-forge/miniforge/releases/download/4.14.0-0/Mambaforge-4.14.0-0-Linux-x86_64.sh\")\n",
|
||||
"\n",
|
||||
"import condacolab\n",
|
||||
"condacolab.check()\n",
|
||||
"\n",
|
||||
"# The runtime will crash after this, its normal as we are forcing a restart of the runtime from code. Just hit \"Run All\" again."
|
||||
],
|
||||
"metadata": {
|
||||
"id": "cDu33xkdJ5mD"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!python --version"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "xd_2zFWSfNCB"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!mamba install cudatoolkit=11.3 git numpy=1.22.3 pip=20.3 python=3.8.5 pytorch=1.11.0 scikit-image=0.19.2 torchvision=0.12.0 -y"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "dmN2igp5Yk3z"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"%cd /content/stable-diffusion-webui/"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "vXX0OaR8KyLQ"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!pip install -r requirements.txt"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "REEG0zJtRC8w"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"%cd /content/stable-diffusion-webui/"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "Kp1PjqxPijZ1"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!npm install localtunnel"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "FHyVuT5aSM2G"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# Huggingface Token"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "RnlaaLAVGYal"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!git config --global credential.helper store\n",
|
||||
"!huggingface-cli login"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "IsbG7fvIrKwg"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# Google drive config"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "xMWVQOg0G1Pj"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"import os, shutil\n",
|
||||
"mount_google_drive = True #@param {type:\"boolean\"}\n",
|
||||
"save_outputs_to_drive = True #@param {type:\"boolean\"}\n",
|
||||
"#save_model_to_drive = True #@param {type:\"boolean\"}\n",
|
||||
"\n",
|
||||
"if mount_google_drive:\n",
|
||||
" # Mount google drive to store your outputs.\n",
|
||||
" from google.colab import drive\n",
|
||||
" drive.mount('/content/drive/', force_remount=True)\n",
|
||||
"\n",
|
||||
"if save_outputs_to_drive:\n",
|
||||
" os.makedirs(\"/content/drive/MyDrive/stable-diffusion-webui/outputs\", exist_ok=True)\n",
|
||||
" #os.makedirs(\"/content/stable-diffusion-webui/outputs\", exist_ok=True)\n",
|
||||
" os.symlink(\"/content/drive/MyDrive/stable-diffusion-webui/outputs\", \"/content/stable-diffusion-webui/outputs\", target_is_directory=True)\n"
|
||||
],
|
||||
"metadata": {
|
||||
"cellView": "form",
|
||||
"id": "pcSWo9Zkzbsf"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"#Launch the WebUI"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "csi6cj6gQZmC"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!streamlit run scripts/webui_streamlit.py --theme.base dark --server.headless True &>/content/logs.txt &"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "SN7C9-dyRlkM"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Expose the port 8501\n",
|
||||
"Then just click in the `url` showed.\n",
|
||||
"\n",
|
||||
"A `log.txt`file will be created."
|
||||
],
|
||||
"metadata": {
|
||||
"id": "h_KW9juhOCuH"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!npx localtunnel --port 8501"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "5whXm2nfSZ39"
|
||||
},
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
}
|
||||
]
|
||||
}
|
@ -19,14 +19,15 @@
|
||||
# You may add overrides in a file named "userconfig_streamlit.yaml" in this folder, which can contain any subset
|
||||
# of the properties below.
|
||||
general:
|
||||
version: 1.20.0
|
||||
streamlit_telemetry: False
|
||||
default_theme: dark
|
||||
huggingface_token: ""
|
||||
huggingface_token: ''
|
||||
gpu: 0
|
||||
outdir: outputs
|
||||
default_model: "Stable Diffusion v1.4"
|
||||
default_model: "Stable Diffusion v1.5"
|
||||
default_model_config: "configs/stable-diffusion/v1-inference.yaml"
|
||||
default_model_path: "models/ldm/stable-diffusion-v1/model.ckpt"
|
||||
default_model_path: "models/ldm/stable-diffusion-v1/Stable Diffusion v1.5.ckpt"
|
||||
use_sd_concepts_library: True
|
||||
sd_concepts_library_folder: "models/custom/sd-concepts-library"
|
||||
GFPGAN_dir: "./models/gfpgan"
|
||||
@ -38,12 +39,14 @@ general:
|
||||
upscaling_method: "RealESRGAN"
|
||||
outdir_txt2img: outputs/txt2img
|
||||
outdir_img2img: outputs/img2img
|
||||
outdir_img2txt: outputs/img2txt
|
||||
gfpgan_cpu: False
|
||||
esrgan_cpu: False
|
||||
extra_models_cpu: False
|
||||
extra_models_gpu: False
|
||||
gfpgan_gpu: 0
|
||||
esrgan_gpu: 0
|
||||
keep_all_models_loaded: False
|
||||
save_metadata: True
|
||||
save_format: "png"
|
||||
skip_grid: False
|
||||
@ -62,6 +65,9 @@ general:
|
||||
update_preview: True
|
||||
update_preview_frequency: 10
|
||||
|
||||
debug:
|
||||
enable_hydralit: False
|
||||
|
||||
txt2img:
|
||||
prompt:
|
||||
width:
|
||||
@ -79,7 +85,6 @@ txt2img:
|
||||
cfg_scale:
|
||||
value: 7.5
|
||||
min_value: 1.0
|
||||
max_value: 30.0
|
||||
step: 0.5
|
||||
|
||||
seed: ""
|
||||
@ -126,8 +131,8 @@ txt2img:
|
||||
write_info_files: True
|
||||
|
||||
txt2vid:
|
||||
default_model: "CompVis/stable-diffusion-v1-4"
|
||||
custom_models_list: ["CompVis/stable-diffusion-v1-4"]
|
||||
default_model: "runwayml/stable-diffusion-v1-5"
|
||||
custom_models_list: ["runwayml/stable-diffusion-v1-5", "CompVis/stable-diffusion-v1-4", "hakurei/waifu-diffusion"]
|
||||
prompt:
|
||||
width:
|
||||
value: 512
|
||||
@ -144,7 +149,6 @@ txt2vid:
|
||||
cfg_scale:
|
||||
value: 7.5
|
||||
min_value: 1.0
|
||||
max_value: 30.0
|
||||
step: 0.5
|
||||
|
||||
batch_count:
|
||||
@ -179,6 +183,7 @@ txt2vid:
|
||||
group_by_prompt: True
|
||||
write_info_files: True
|
||||
do_loop: False
|
||||
use_lerp_for_text: False
|
||||
save_as_jpg: False
|
||||
use_GFPGAN: False
|
||||
use_RealESRGAN: False
|
||||
@ -194,16 +199,16 @@ txt2vid:
|
||||
|
||||
beta_start:
|
||||
value: 0.00085
|
||||
min_value: 0.0001
|
||||
max_value: 0.0300
|
||||
step: 0.0001
|
||||
min_value: 0.00010
|
||||
max_value: 0.03000
|
||||
step: 0.00010
|
||||
format: "%.5f"
|
||||
|
||||
beta_end:
|
||||
value: 0.012
|
||||
min_value: 0.0001
|
||||
max_value: 0.0300
|
||||
step: 0.0001
|
||||
value: 0.01200
|
||||
min_value: 0.00010
|
||||
max_value: 0.03000
|
||||
step: 0.00010
|
||||
format: "%.5f"
|
||||
|
||||
beta_scheduler_type: "scaled_linear"
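For context, these txt2vid beta values correspond to the beta arguments a `diffusers` scheduler accepts. A hedged sketch of that mapping, using the `diffusers` package pinned in requirements.txt (the exact wiring inside the webui is an assumption):

```python
from diffusers import DDIMScheduler

# Illustrative only: build a scheduler from values like those in the config above
scheduler = DDIMScheduler(
    beta_start=0.00085,             # beta_start.value
    beta_end=0.012,                 # beta_end.value
    beta_schedule="scaled_linear",  # beta_scheduler_type
)
print(scheduler.config)
```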
|
||||
@ -249,7 +254,6 @@ img2img:
|
||||
cfg_scale:
|
||||
value: 7.5
|
||||
min_value: 1.0
|
||||
max_value: 30.0
|
||||
step: 0.5
|
||||
|
||||
batch_count:
|
||||
@ -272,9 +276,8 @@ img2img:
|
||||
|
||||
find_noise_steps:
|
||||
value: 100
|
||||
min_value: 0
|
||||
max_value: 500
|
||||
step: 10
|
||||
min_value: 100
|
||||
step: 100
|
||||
|
||||
LDSR_config:
|
||||
sampling_steps: 50
|
||||
@ -322,12 +325,12 @@ daisi_app:
|
||||
model_manager:
|
||||
models:
|
||||
stable_diffusion:
|
||||
model_name: "Stable Diffusion v1.4"
|
||||
model_name: "Stable Diffusion v1.5"
|
||||
save_location: "./models/ldm/stable-diffusion-v1"
|
||||
files:
|
||||
model_ckpt:
|
||||
file_name: "model.ckpt"
|
||||
download_link: "https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media"
|
||||
file_name: "Stable Diffusion v1.5.ckpt"
|
||||
download_link: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt"
|
||||
|
||||
gfpgan:
|
||||
model_name: "GFPGAN"
|
||||
@ -359,12 +362,12 @@ model_manager:
|
||||
|
||||
|
||||
waifu_diffusion:
|
||||
model_name: "Waifu Diffusion v1.2"
|
||||
model_name: "Waifu Diffusion v1.3"
|
||||
save_location: "./models/custom"
|
||||
files:
|
||||
waifu_diffusion:
|
||||
file_name: "waifu-diffusion.ckpt"
|
||||
download_link: "https://huggingface.co/crumb/pruned-waifu-diffusion/resolve/main/model-pruned.ckpt"
|
||||
file_name: "Waifu-Diffusion-v1-3 Full ema.ckpt"
|
||||
download_link: "https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-full.ckpt"
|
||||
|
||||
|
||||
trinart_stable_diffusion:
|
||||
|
26
data/tags/config.json
Normal file
@ -0,0 +1,26 @@
|
||||
{
|
||||
"tagFile": "danbooru.csv",
|
||||
"maxResults": 5,
|
||||
"replaceUnderscores": true,
|
||||
"escapeParentheses": true,
|
||||
"colors": {
|
||||
"danbooru": {
|
||||
"0": ["lightblue", "dodgerblue"],
|
||||
"1": ["indianred", "firebrick"],
|
||||
"3": ["violet", "darkorchid"],
|
||||
"4": ["lightgreen", "darkgreen"],
|
||||
"5": ["orange", "darkorange"]
|
||||
},
|
||||
"e621": {
|
||||
"-1": ["red", "maroon"],
|
||||
"0": ["lightblue", "dodgerblue"],
|
||||
"1": ["gold", "goldenrod"],
|
||||
"3": ["violet", "darkorchid"],
|
||||
"4": ["lightgreen", "darkgreen"],
|
||||
"5": ["tomato", "darksalmon"],
|
||||
"6": ["red", "maroon"],
|
||||
"7": ["whitesmoke", "black"],
|
||||
"8": ["seagreen", "darkseagreen"]
|
||||
}
|
||||
}
|
||||
}
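A small, hypothetical sketch of how a tag-autocomplete frontend might consume this config; the file paths come from this commit, but the lookup logic below is purely an assumption for illustration:

```python
import csv
import json

with open("data/tags/config.json") as f:
    cfg = json.load(f)

def normalize(tag):
    # Apply the replaceUnderscores / escapeParentheses options to a raw tag
    if cfg["replaceUnderscores"]:
        tag = tag.replace("_", " ")
    if cfg["escapeParentheses"]:
        tag = tag.replace("(", r"\(").replace(")", r"\)")
    return tag

with open("data/tags/" + cfg["tagFile"], newline="") as f:
    rows = [row for row in csv.reader(f) if row]

query = "1girl"
matches = [normalize(row[0]) for row in rows if row[0].startswith(query)][: cfg["maxResults"]]
print(matches)
```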
|
109721
data/tags/danbooru.csv
Normal file
File diff suppressed because it is too large
66094
data/tags/e621.csv
Normal file
File diff suppressed because it is too large
29991
data/tags/key_phrases.json
Normal file
File diff suppressed because it is too large
1
data/tags/thumbnails.json
Normal file
File diff suppressed because one or more lines are too long
@ -26,10 +26,11 @@ button[data-baseweb="tab"] {
|
||||
}
|
||||
|
||||
/* Image Container (only appear after run finished)//center the image, especially better looks in wide screen */
|
||||
.css-du1fp8 {
|
||||
justify-content: center;
|
||||
.css-1kyxreq{
|
||||
justify-content: center;
|
||||
}
|
||||
|
||||
|
||||
/* Streamlit header */
|
||||
.css-1avcm0n {
|
||||
background-color: transparent;
|
||||
@ -135,6 +136,7 @@ div.gallery:hover {
|
||||
/********************************************************************
|
||||
Hide anchor links on titles
|
||||
*********************************************************************/
|
||||
/*
|
||||
.css-15zrgzn {
|
||||
display: none
|
||||
}
|
||||
@ -145,8 +147,32 @@ div.gallery:hover {
|
||||
display: none
|
||||
}
|
||||
|
||||
/* Make the text area widget have a similar height as the text input field*/
|
||||
.st-ex{
|
||||
/* Make the text area widget have a similar height as the text input field */
|
||||
.st-dy{
|
||||
height: 54px;
|
||||
min-height: 25px;
|
||||
}
|
||||
}
|
||||
.css-17useex{
|
||||
gap: 3px;
|
||||
|
||||
}
|
||||
|
||||
/* Remove some empty spaces to make the UI more compact. */
|
||||
.css-18e3th9{
|
||||
padding-left: 10px;
|
||||
padding-right: 10px;
|
||||
position: unset !important; /* Fixes the layout/page going up when an expander or another item is expanded and then collapsed */
|
||||
}
|
||||
.css-k1vhr4{
|
||||
padding-top: initial;
|
||||
}
|
||||
.css-ret2ud{
|
||||
padding-left: 10px;
|
||||
padding-right: 25px;
|
||||
gap: initial;
|
||||
display: initial;
|
||||
}
|
||||
|
||||
.css-w5z5an{
|
||||
gap: 1px;
|
||||
}
|
||||
|
@ -88,3 +88,11 @@ input[type=number]:disabled { -moz-appearance: textfield; }
|
||||
/* fix buttons layouts */
|
||||
|
||||
}
|
||||
|
||||
/* Gradio 3.4 FIXES */
|
||||
#prompt_row button {
|
||||
max-width: 20ch;
|
||||
}
|
||||
#text2img_col2 {
|
||||
flex-grow: 2 !important;
|
||||
}
|
||||
|
@ -65,7 +65,7 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
|
||||
|
||||
txt2img_dimensions_info_text_box = gr.Textbox(
|
||||
label="Aspect ratio (4:3 = 1.333 | 16:9 = 1.777 | 21:9 = 2.333)")
|
||||
with gr.Column():
|
||||
with gr.Column(elem_id="text2img_col2"):
|
||||
with gr.Box():
|
||||
output_txt2img_gallery = gr.Gallery(label="Images", elem_id="txt2img_gallery_output").style(
|
||||
grid=[4, 4])
|
||||
@ -312,7 +312,7 @@ def draw_gradio_ui(opt, img2img=lambda x: x, txt2img=lambda x: x, imgproc=lambda
|
||||
label='Batch count (how many batches of images to generate)',
|
||||
value=img2img_defaults['n_iter'])
|
||||
img2img_dimensions_info_text_box = gr.Textbox(
|
||||
label="Aspect ratio (4:3 = 1.333 | 16:9 = 1.777 | 21:9 = 2.333)")
|
||||
label="Aspect ratio (4:3 = 1.333 | 16:9 = 1.777 | 21:9 = 2.333)", lines="2")
|
||||
with gr.Column():
|
||||
img2img_steps = gr.Slider(minimum=1, maximum=250, step=1, label="Sampling Steps",
|
||||
value=img2img_defaults['ddim_steps'])
|
||||
|
@ -58,20 +58,23 @@ IF "%v_conda_path%"=="" (
|
||||
|
||||
:CONDA_FOUND
|
||||
echo Stashing local changes and pulling latest update...
|
||||
git status --porcelain=1 -uno | findstr . && set "HasChanges=1" || set "HasChanges=0"
|
||||
call git stash
|
||||
call git pull
|
||||
IF "%HasChanges%" == "0" GOTO SKIP_RESTORE
|
||||
|
||||
set /P restore="Do you want to restore changes you made before updating? (Y/N): "
|
||||
IF /I "%restore%" == "N" (
|
||||
echo Removing changes please wait...
|
||||
echo Removing changes...
|
||||
call git stash drop
|
||||
echo Changes removed, press any key to continue...
|
||||
pause >nul
|
||||
echo "Changes removed"
|
||||
) ELSE IF /I "%restore%" == "Y" (
|
||||
echo Restoring changes, please wait...
|
||||
echo Restoring changes...
|
||||
call git stash pop --quiet
|
||||
echo Changes restored, press any key to continue...
|
||||
pause >nul
|
||||
echo "Changes restored"
|
||||
)
|
||||
|
||||
:SKIP_RESTORE
|
||||
call "%v_conda_path%\Scripts\activate.bat"
|
||||
|
||||
for /f "delims=" %%a in ('git log -1 --format^="%%H" -- environment.yaml') DO set v_cur_hash=%%a
|
||||
|
@ -162,7 +162,7 @@ start_initialization () {
|
||||
echo "Your model file does not exist! Place it in 'models/ldm/stable-diffusion-v1' with the name 'model.ckpt'."
|
||||
exit 1
|
||||
fi
|
||||
printf "\nStarting Stable Horde Bridg: Please Wait...\n"; python scripts/relauncher.py --bridge -v "$@"; break;
|
||||
printf "\nStarting Stable Horde Bridge: Please Wait...\n"; python scripts/relauncher.py --bridge -v "$@"; break;
|
||||
|
||||
}
|
||||
|
||||
|
@ -22,7 +22,7 @@ omegaconf==2.2.3
|
||||
Jinja2==3.1.2 # Jinja2 is required by Gradio
|
||||
|
||||
# Environment Dependencies for WebUI (gradio)
|
||||
gradio==3.1.6
|
||||
gradio==3.4.1
|
||||
|
||||
# Environment Dependencies for WebUI (streamlit)
|
||||
streamlit==1.13.0
|
||||
@ -34,7 +34,16 @@ streamlit-tensorboard==0.0.2
|
||||
hydralit==1.0.14
|
||||
hydralit_components==1.0.10
|
||||
stqdm==0.0.4
|
||||
diffusers==0.4.1
|
||||
uvicorn
|
||||
fastapi
|
||||
|
||||
# txt2vid
|
||||
stable-diffusion-videos==0.5.3
|
||||
diffusers==0.4
|
||||
librosa==0.9.2
|
||||
|
||||
# img2img inpainting
|
||||
streamlit-drawable-canvas==0.9.2
|
||||
|
||||
# Img2text
|
||||
ftfy==6.1.1
|
||||
@ -76,7 +85,7 @@ wget
|
||||
basicsr==1.4.2 # required by RealESRGAN
|
||||
gfpgan==1.3.8 # GFPGAN
|
||||
realesrgan==0.3.0 # RealESRGAN brings in GFPGAN as a requirement
|
||||
git+https://github.com/CompVis/latent-diffusion
|
||||
-e git+https://github.com/devilismyfriend/latent-diffusion#egg=latent-diffusion
|
||||
|
||||
## for monocular depth estimation
|
||||
tensorflow==2.10.0
|
||||
|
36
scripts/APIServer.py
Normal file
@ -0,0 +1,36 @@
|
||||
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
|
||||
|
||||
# Copyright 2022 sd-webui team.
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
# base webui import and utils.
|
||||
#from sd_utils import *
|
||||
from sd_utils import *
|
||||
# streamlit imports
|
||||
|
||||
#streamlit components section
|
||||
|
||||
#other imports
|
||||
import os, time, requests
|
||||
import sys
|
||||
#from fastapi import FastAPI
|
||||
#import uvicorn
|
||||
|
||||
# Temp imports
|
||||
|
||||
# end of imports
|
||||
#---------------------------------------------------------------------------------------------------------------
|
||||
|
||||
|
||||
def layout():
|
||||
st.info("Under Construction. :construction_worker:")
|
@ -12,15 +12,18 @@
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
# base webui import and utils.
|
||||
from sd_utils import *
|
||||
# streamlit imports
|
||||
|
||||
|
||||
#other imports
|
||||
from requests.auth import HTTPBasicAuth
|
||||
from requests import HTTPError
|
||||
from stqdm import stqdm
|
||||
|
||||
# Temp imports
|
||||
# Temp imports
|
||||
|
||||
|
||||
# end of imports
|
||||
@ -28,16 +31,33 @@ from sd_utils import *
|
||||
def download_file(file_name, file_path, file_url):
|
||||
if not os.path.exists(file_path):
|
||||
os.makedirs(file_path)
|
||||
|
||||
|
||||
if not os.path.exists(os.path.join(file_path , file_name)):
|
||||
print('Downloading ' + file_name + '...')
|
||||
# TODO - add progress bar in streamlit
|
||||
# download file with `requests``
|
||||
with requests.get(file_url, stream=True) as r:
|
||||
r.raise_for_status()
|
||||
with open(os.path.join(file_path, file_name), 'wb') as f:
|
||||
for chunk in r.iter_content(chunk_size=8192):
|
||||
f.write(chunk)
|
||||
if file_name == "Stable Diffusion v1.5":
|
||||
if "huggingface_token" not in st.session_state or st.session_state["defaults"].general.huggingface_token == "None":
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].error(
|
||||
"You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token."
|
||||
)
|
||||
raise OSError("You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token.")
|
||||
|
||||
try:
|
||||
with requests.get(file_url, auth = HTTPBasicAuth('token', st.session_state.defaults.general.huggingface_token), stream=True) as r:
|
||||
r.raise_for_status()
|
||||
with open(os.path.join(file_path, file_name), 'wb') as f:
|
||||
for chunk in stqdm(r.iter_content(chunk_size=8192), backend=True, unit="kb"):
|
||||
f.write(chunk)
|
||||
except HTTPError:
|
||||
if "huggingface.co" in file_url:
|
||||
if "resolve"in file_url:
|
||||
repo_url = file_url.split("resolve")[0]
|
||||
|
||||
st.session_state["progress_bar_text"].error(
|
||||
f"You need to accept the license for the model in order to be able to download it. "
|
||||
f"Please visit {repo_url} and accept the lincense there, then try again to download the model.")
|
||||
|
||||
else:
|
||||
print(file_name + ' already exists.')
|
||||
@ -51,18 +71,18 @@ def download_model(models, model_name):
|
||||
|
||||
def layout():
|
||||
#search = st.text_input(label="Search", placeholder="Type the name of the model you want to search for.", help="")
|
||||
|
||||
colms = st.columns((1, 3, 5, 5))
|
||||
columns = ["№",'Model Name','Save Location','Download Link']
|
||||
|
||||
|
||||
colms = st.columns((1, 3, 3, 5, 5))
|
||||
columns = ["№", 'Model Name', 'Save Location', "Download", 'Download Link']
|
||||
|
||||
models = st.session_state["defaults"].model_manager.models
|
||||
|
||||
for col, field_name in zip(colms, columns):
|
||||
# table header
|
||||
col.write(field_name)
|
||||
|
||||
|
||||
for x, model_name in enumerate(models):
|
||||
col1, col2, col3, col4 = st.columns((1, 3, 4, 6))
|
||||
col1, col2, col3, col4, col5 = st.columns((1, 3, 3, 3, 6))
|
||||
col1.write(x) # index
|
||||
col2.write(models[model_name]['model_name'])
|
||||
col3.write(models[model_name]['save_location'])
|
||||
@ -88,7 +108,10 @@ def layout():
|
||||
download_file(models[model_name]['files'][file]['file_name'], models[model_name]['files'][file]['save_location'], models[model_name]['files'][file]['download_link'])
|
||||
else:
|
||||
download_file(models[model_name]['files'][file]['file_name'], models[model_name]['save_location'], models[model_name]['files'][file]['download_link'])
|
||||
st.experimental_rerun()
|
||||
else:
|
||||
st.empty()
|
||||
else:
|
||||
st.write('✅')
|
||||
st.write('✅')
|
||||
|
||||
#
|
||||
|
1263
scripts/Settings.py
File diff suppressed because it is too large
@ -0,0 +1 @@
|
||||
from logger import set_logger_verbosity, quiesce_logger
|
11
scripts/custom_components/draggable_number_input/__init__.py
Normal file
@ -0,0 +1,11 @@
|
||||
import os
|
||||
import streamlit.components.v1 as components
|
||||
|
||||
def load(pixel_per_step = 50):
|
||||
parent_dir = os.path.dirname(os.path.abspath(__file__))
|
||||
file = os.path.join(parent_dir, "main.js")
|
||||
|
||||
with open(file) as f:
|
||||
javascript_main = f.read()
|
||||
javascript_main = javascript_main.replace("%%pixelPerStep%%",str(pixel_per_step))
|
||||
components.html(f"<script>{javascript_main}</script>")
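A minimal usage sketch for this component; the import path and the Streamlit page wiring below are assumptions, not code from this commit:

```python
import streamlit as st
from custom_components import draggable_number_input

# A regular Streamlit number input...
steps = st.number_input("Sampling Steps", min_value=1, max_value=250, value=30, step=1)

# ...then inject the middle-click-drag behaviour into every number input on the page.
# pixel_per_step controls how far the mouse must travel for one step change.
draggable_number_input.load(pixel_per_step=50)
```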
|
192
scripts/custom_components/draggable_number_input/main.js
Normal file
@ -0,0 +1,192 @@
|
||||
// iframe parent
|
||||
var parentDoc = window.parent.document
|
||||
|
||||
// check for mouse pointer locking support, not a requirement but improves the overall experience
|
||||
var havePointerLock = 'pointerLockElement' in parentDoc ||
|
||||
'mozPointerLockElement' in parentDoc ||
|
||||
'webkitPointerLockElement' in parentDoc;
|
||||
|
||||
// the pointer locking exit function
|
||||
parentDoc.exitPointerLock = parentDoc.exitPointerLock || parentDoc.mozExitPointerLock || parentDoc.webkitExitPointerLock;
|
||||
|
||||
// how far should the mouse travel for a step in pixel
|
||||
var pixelPerStep = %%pixelPerStep%%;
|
||||
// how many steps did the mouse move in as float
|
||||
var movementDelta = 0.0;
|
||||
// value when drag started
|
||||
var lockedValue = 0.0;
|
||||
// minimum value from field
|
||||
var lockedMin = 0.0;
|
||||
// maximum value from field
|
||||
var lockedMax = 0.0;
|
||||
// how big should the field steps be
|
||||
var lockedStep = 0.0;
|
||||
// the currently locked in field
|
||||
var lockedField = null;
|
||||
|
||||
// lock box to just request pointer lock for one element
|
||||
var lockBox = document.createElement("div");
|
||||
lockBox.classList.add("lockbox");
|
||||
parentDoc.body.appendChild(lockBox);
|
||||
lockBox.requestPointerLock = lockBox.requestPointerLock || lockBox.mozRequestPointerLock || lockBox.webkitRequestPointerLock;
|
||||
|
||||
function Lock(field)
|
||||
{
|
||||
var rect = field.getBoundingClientRect();
|
||||
lockBox.style.left = (rect.left-2.5)+"px";
|
||||
lockBox.style.top = (rect.top-2.5)+"px";
|
||||
|
||||
lockBox.style.width = (rect.width+2.5)+"px";
|
||||
lockBox.style.height = (rect.height+5)+"px";
|
||||
|
||||
lockBox.requestPointerLock();
|
||||
}
|
||||
|
||||
function Unlock()
|
||||
{
|
||||
parentDoc.exitPointerLock();
|
||||
lockBox.style.left = "0px";
|
||||
lockBox.style.top = "0px";
|
||||
|
||||
lockBox.style.width = "0px";
|
||||
lockBox.style.height = "0px";
|
||||
lockedField.focus();
|
||||
}
|
||||
|
||||
parentDoc.addEventListener('mousedown', (e) => {
|
||||
// if middle is down
|
||||
if(e.button === 1)
|
||||
{
|
||||
if(e.target.tagName === 'INPUT' && e.target.type === 'number')
|
||||
{
|
||||
e.preventDefault();
|
||||
var field = e.target;
|
||||
if(havePointerLock)
|
||||
Lock(field);
|
||||
|
||||
// save current field
|
||||
lockedField = e.target;
|
||||
// add class for styling
|
||||
lockedField.classList.add("value-dragging");
|
||||
// reset movement delta
|
||||
movementDelta = 0.0;
|
||||
// set to 0 if field is empty
|
||||
if(lockedField.value === '')
|
||||
lockedField.value = 0.0;
|
||||
|
||||
// save current field value
|
||||
lockedValue = parseFloat(lockedField.value);
|
||||
|
||||
if(lockedField.min === '' || lockedField.min === '-Infinity')
|
||||
lockedMin = -99999999.0;
|
||||
else
|
||||
lockedMin = parseFloat(lockedField.min);
|
||||
|
||||
if(lockedField.max === '' || lockedField.max === 'Infinity')
|
||||
lockedMax = 99999999.0;
|
||||
else
|
||||
lockedMax = parseFloat(lockedField.max);
|
||||
|
||||
if(lockedField.step === '' || lockedField.step === 'Infinity')
|
||||
lockedStep = 1.0;
|
||||
else
|
||||
lockedStep = parseFloat(lockedField.step);
|
||||
|
||||
// lock pointer if available
|
||||
if(havePointerLock)
|
||||
Lock(lockedField);
|
||||
|
||||
// add drag event
|
||||
parentDoc.addEventListener("mousemove", onDrag, false);
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
function onDrag(e)
|
||||
{
|
||||
if(lockedField !== null)
|
||||
{
|
||||
// add movement to delta
|
||||
movementDelta += e.movementX / pixelPerStep;
|
||||
if(Number.isNaN(lockedValue))
|
||||
return;
|
||||
// set new value
|
||||
let value = lockedValue + Math.floor(Math.abs(movementDelta)) * lockedStep * Math.sign(movementDelta);
|
||||
lockedField.focus();
|
||||
lockedField.select();
|
||||
parentDoc.execCommand('insertText', false /*no UI*/, Math.min(Math.max(value, lockedMin), lockedMax));
|
||||
}
|
||||
}
|
||||
|
||||
parentDoc.addEventListener('mouseup', (e) => {
|
||||
// if mouse is up
|
||||
if(e.button === 1)
|
||||
{
|
||||
// release pointer lock if available
|
||||
if(havePointerLock)
|
||||
Unlock();
|
||||
|
||||
if(lockedField !== null && lockedField !== NaN)
|
||||
{
|
||||
// stop drag event
|
||||
parentDoc.removeEventListener("mousemove", onDrag, false);
|
||||
// remove class for styling
|
||||
lockedField.classList.remove("value-dragging");
|
||||
// remove reference
|
||||
lockedField = null;
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// only execute once (even though multiple iframes exist)
|
||||
if(!parentDoc.hasOwnProperty("dragableInitialized"))
|
||||
{
|
||||
var parentCSS =
|
||||
`
|
||||
/* Make input-instruction not block mouse events */
|
||||
.input-instructions,.input-instructions > *{
|
||||
pointer-events: none;
|
||||
user-select: none;
|
||||
-moz-user-select: none;
|
||||
-khtml-user-select: none;
|
||||
-webkit-user-select: none;
|
||||
-o-user-select: none;
|
||||
}
|
||||
|
||||
.lockbox {
|
||||
background-color: transparent;
|
||||
position: absolute;
|
||||
pointer-events: none;
|
||||
user-select: none;
|
||||
-moz-user-select: none;
|
||||
-khtml-user-select: none;
|
||||
-webkit-user-select: none;
|
||||
-o-user-select: none;
|
||||
border-left: dotted 2px rgb(255,75,75);
|
||||
border-top: dotted 2px rgb(255,75,75);
|
||||
border-bottom: dotted 2px rgb(255,75,75);
|
||||
border-right: dotted 1px rgba(255,75,75,0.2);
|
||||
border-top-left-radius: 0.25rem;
|
||||
border-bottom-left-radius: 0.25rem;
|
||||
z-index: 1000;
|
||||
}
|
||||
`;
|
||||
|
||||
// get parent document head
|
||||
var head = parentDoc.getElementsByTagName('head')[0];
|
||||
// add style tag
|
||||
var s = document.createElement('style');
|
||||
// set type attribute
|
||||
s.setAttribute('type', 'text/css');
|
||||
// add css forwarded from python
|
||||
if (s.styleSheet) { // IE
|
||||
s.styleSheet.cssText = parentCSS;
|
||||
} else { // the world
|
||||
s.appendChild(document.createTextNode(parentCSS));
|
||||
}
|
||||
// add style to head
|
||||
head.appendChild(s);
|
||||
// set flag so this only runs once
|
||||
parentDoc["dragableInitialized"] = true;
|
||||
}
|
||||
|
46
scripts/custom_components/key_phrase_suggestions/__init__.py
Normal file
@ -0,0 +1,46 @@
import os
from collections import defaultdict
import streamlit.components.v1 as components

# where to save the downloaded key_phrases
key_phrases_file = "data/tags/key_phrases.json"
# the loaded key phrase json as text
key_phrases_json = ""
# where to save the downloaded thumbnails
thumbnails_file = "data/tags/thumbnails.json"
# the loaded thumbnail json as text
thumbnails_json = ""

def init():
    global key_phrases_json, thumbnails_json
    with open(key_phrases_file) as f:
        key_phrases_json = f.read()
    with open(thumbnails_file) as f:
        thumbnails_json = f.read()

def suggestion_area(placeholder):
    # get component path
    parent_dir = os.path.dirname(os.path.abspath(__file__))
    # get file paths
    javascript_file = os.path.join(parent_dir, "main.js")
    stylesheet_file = os.path.join(parent_dir, "main.css")
    parent_stylesheet_file = os.path.join(parent_dir, "parent.css")

    # load file texts
    with open(javascript_file) as f:
        javascript_main = f.read()
    with open(stylesheet_file) as f:
        stylesheet_main = f.read()
    with open(parent_stylesheet_file) as f:
        parent_stylesheet = f.read()

    # add suggestion area div box
    html = "<div id='suggestion_area'>javascript failed</div>"
    # add loaded style
    html += f"<style>{stylesheet_main}</style>"
    # set default variables
    html += f"<script>var thumbnails = {thumbnails_json};\nvar keyPhrases = {key_phrases_json};\nvar parentCSS = `{parent_stylesheet}`;\nvar placeholder='{placeholder}';</script>"
    # add main java script
    html += f"\n<script>{javascript_main}</script>"
    # add component to site
    components.html(html, width=None, height=None, scrolling=True)
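A minimal sketch of how the component is meant to be wired into a Streamlit page, mirroring the img2img.py change further down in this commit; the prompt placeholder text is only an example, and the component locates the prompt textarea by that placeholder string:

```python
import streamlit as st
from custom_components import key_phrase_suggestions

# load data/tags/key_phrases.json and data/tags/thumbnails.json once at startup
key_phrase_suggestions.init()

placeholder = "A corgi wearing a top hat as an oil painting."
prompt = st.text_area("Input Text", "", placeholder=placeholder, height=54)
# render the suggestion buttons directly below the prompt field
key_phrase_suggestions.suggestion_area(placeholder)
```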
49
scripts/custom_components/key_phrase_suggestions/main.css
Normal file
@@ -0,0 +1,49 @@
*
{
    padding: 0px;
    margin: 0px;
    user-select: none;
    -moz-user-select: none;
    -khtml-user-select: none;
    -webkit-user-select: none;
    -o-user-select: none;
}

body
{
    width: 100%;
    height: 100%;
}

#suggestion_area
{
    overflow-y: auto;
    width: 100%;
    height: 100%;
}

span
{
    border: 1px solid rgba(250, 250, 250, 0.2);
    border-radius: 0.25rem;
    font-size: 1rem;
    font-family: "Source Sans Pro", sans-serif;

    background-color: rgb(38, 39, 48);
    color: white;
    display: inline-block;
    padding: 0.5rem;
    margin-right: 3px;
    cursor: pointer;
    user-select: none;
    -moz-user-select: none;
    -khtml-user-select: none;
    -webkit-user-select: none;
    -o-user-select: none;
}

span:hover
{
    color: rgb(255,75,75);
    border-color: rgb(255,75,75);
}
329
scripts/custom_components/key_phrase_suggestions/main.js
Normal file
@@ -0,0 +1,329 @@
|
||||
// parent document
|
||||
var parentDoc = window.parent.document;
|
||||
// iframe element in parent document
|
||||
var frame = window.frameElement;
|
||||
// the area to put the suggestions in
|
||||
var suggestionArea = document.getElementById('suggestion_area');
|
||||
// button height is read when the first button gets created
|
||||
var buttonHeight = -1;
|
||||
// the maximum size of the iframe in buttons (3 x buttons height)
|
||||
var maxHeightInButtons = 3;
|
||||
// the prompt field connected to this iframe
|
||||
var promptField = null;
|
||||
// the category of suggestions
|
||||
var activeCategory = [];
|
||||
|
||||
var conditionalButtons = null;
|
||||
|
||||
function currentFrameAbsolutePosition() {
|
||||
let currentWindow = window;
|
||||
let currentParentWindow;
|
||||
let positions = [];
|
||||
let rect;
|
||||
|
||||
while (currentWindow !== window.top) {
|
||||
currentParentWindow = currentWindow.parent;
|
||||
for (let idx = 0; idx < currentParentWindow.frames.length; idx++)
|
||||
if (currentParentWindow.frames[idx] === currentWindow) {
|
||||
for (let frameElement of currentParentWindow.document.getElementsByTagName('iframe')) {
|
||||
if (frameElement.contentWindow === currentWindow) {
|
||||
rect = frameElement.getBoundingClientRect();
|
||||
positions.push({x: rect.x, y: rect.y});
|
||||
}
|
||||
}
|
||||
currentWindow = currentParentWindow;
|
||||
break;
|
||||
}
|
||||
}
|
||||
return positions.reduce((accumulator, currentValue) => {
|
||||
return {
|
||||
x: accumulator.x + currentValue.x,
|
||||
y: accumulator.y + currentValue.y
|
||||
};
|
||||
}, { x: 0, y: 0 });
|
||||
}
|
||||
|
||||
// check if element is visible
|
||||
function isVisible(e) {
|
||||
return !!( e.offsetWidth || e.offsetHeight || e.getClientRects().length );
|
||||
}
|
||||
|
||||
// remove everything from the suggestion area
|
||||
function ClearSuggestionArea(text = "")
|
||||
{
|
||||
suggestionArea.innerHTML = text;
|
||||
conditionalButtons = [];
|
||||
}
|
||||
|
||||
// update iframe size depending on button rows
|
||||
function UpdateSize()
|
||||
{
|
||||
// calculate maximum height
|
||||
var maxHeight = buttonHeight * maxHeightInButtons;
|
||||
// apply height to iframe
|
||||
frame.style.height = Math.min(suggestionArea.offsetHeight,maxHeight)+"px";
|
||||
}
|
||||
|
||||
// add a button to the suggestion area
|
||||
function AddButton(label, action, dataTooltip="", tooltipImage="", pattern="", data="")
|
||||
{
|
||||
// create span
|
||||
var button = document.createElement("span");
|
||||
// label it
|
||||
button.innerHTML = label;
|
||||
if(data != "")
|
||||
{
|
||||
// add data attribute to button, will be read on click
|
||||
button.setAttribute("data",data);
|
||||
}
|
||||
if(pattern != "")
|
||||
{
|
||||
// add pattern attribute to button, will be read on click
|
||||
button.setAttribute("pattern",pattern);
|
||||
}
|
||||
if(dataTooltip != "")
|
||||
{
|
||||
// add tooltip text attribute to button, will be read on hover
|
||||
button.setAttribute("tooltip-text",dataTooltip);
|
||||
}
|
||||
if(tooltipImage != "")
|
||||
{
|
||||
// add tooltip image attribute to button, will be read on hover
|
||||
button.setAttribute("tooltip-image",tooltipImage);
|
||||
}
|
||||
// add button function
|
||||
button.addEventListener('click', action, false);
|
||||
button.addEventListener('mouseover', ButtonHoverEnter);
|
||||
button.addEventListener('mouseout', ButtonHoverExit);
|
||||
// add button to suggestion area
|
||||
suggestionArea.appendChild(button);
|
||||
// get buttonHeight if not set
|
||||
if(buttonHeight < 0)
|
||||
buttonHeight = button.offsetHeight;
|
||||
return button;
|
||||
}
|
||||
|
||||
// find visible prompt field to connect to this iframe
|
||||
function GetPromptField()
|
||||
{
|
||||
// get all prompt fields, the %% placeholder %% is set in python
|
||||
var all = parentDoc.querySelectorAll('textarea[placeholder="'+placeholder+'"]');
|
||||
// filter visible
|
||||
for(var i = 0; i < all.length; i++)
|
||||
{
|
||||
if(isVisible(all[i]))
|
||||
{
|
||||
promptField = all[i];
|
||||
promptField.addEventListener('input', OnChange, false);
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function OnChange(e)
|
||||
{
|
||||
ButtonConditions();
|
||||
}
|
||||
|
||||
// when pressing a button, give the focus back to the prompt field
|
||||
function KeepFocus(e)
|
||||
{
|
||||
e.preventDefault();
|
||||
promptField.focus();
|
||||
}
|
||||
|
||||
function selectCategory(e)
|
||||
{
|
||||
KeepFocus(e);
|
||||
// set category from attribute
|
||||
activeCategory = e.target.getAttribute("data");
|
||||
// rebuild menu
|
||||
ShowMenu();
|
||||
}
|
||||
|
||||
function leaveCategory(e)
|
||||
{
|
||||
KeepFocus(e);
|
||||
activeCategory = "";
|
||||
// rebuild menu
|
||||
ShowMenu();
|
||||
}
|
||||
|
||||
function SelectPhrase(e)
|
||||
{
|
||||
KeepFocus(e);
|
||||
var pattern = e.target.getAttribute("pattern");
|
||||
var entry = e.target.getAttribute("data");
|
||||
|
||||
// inserting via execCommand is required, this triggers all native browser functionality as if the user wrote into the prompt field.
|
||||
parentDoc.execCommand('insertText', false /*no UI*/, pattern.replace('{}',entry));
|
||||
}
|
||||
|
||||
function CheckButtonCondition(condition)
|
||||
{
|
||||
if(condition === "empty")
|
||||
{
|
||||
return promptField.value == "";
|
||||
}
|
||||
}
|
||||
|
||||
function ButtonConditions()
|
||||
{
|
||||
conditionalButtons.forEach(entry =>
|
||||
{
|
||||
if(CheckButtonCondition(entry.condition))
|
||||
entry.element.style.display = "inline-block";
|
||||
else
|
||||
entry.element.style.display = "none";
|
||||
});
|
||||
}
|
||||
|
||||
function ButtonHoverEnter(e)
|
||||
{
|
||||
var text = e.target.getAttribute("tooltip-text");
|
||||
var image = e.target.getAttribute("tooltip-image");
|
||||
ShowTooltip(text, e.target, image)
|
||||
}
|
||||
|
||||
function ButtonHoverExit(e)
|
||||
{
|
||||
HideTooltip();
|
||||
}
|
||||
|
||||
function ShowTooltip(text, target, image = "")
|
||||
{
|
||||
if((text == "" || text == null) && (image == "" || image == null || thumbnails[image] === undefined))
|
||||
return;
|
||||
|
||||
var currentFramePosition = currentFrameAbsolutePosition();
|
||||
var rect = target.getBoundingClientRect();
|
||||
var element = parentDoc["phraseTooltip"];
|
||||
element.innerText = text;
|
||||
if(image != "" && image != null && thumbnails[image] !== undefined)
|
||||
{
|
||||
|
||||
var img = parentDoc.createElement('img');
|
||||
console.log(image);
|
||||
img.src = "data:image/webp;base64, "+thumbnails[image];
|
||||
|
||||
console.log(thumbnails[image]);
|
||||
element.appendChild(img)
|
||||
}
|
||||
element.style.display = "flex";
|
||||
element.style.top = (rect.bottom+currentFramePosition.y)+"px";
|
||||
element.style.left = (rect.right+currentFramePosition.x)+"px";
|
||||
element.style.width = "inherit";
|
||||
element.style.height = "inherit";
|
||||
}
|
||||
|
||||
function HideTooltip()
|
||||
{
|
||||
var element = parentDoc["phraseTooltip"];
|
||||
element.style.display= "none";
|
||||
element.innerHTML = "";
|
||||
element.style.top = "0px";
|
||||
element.style.left = "0px";
|
||||
element.style.width = "0px";
|
||||
element.style.height = "0px";
|
||||
}
|
||||
|
||||
// generate menu in suggestion area
|
||||
function ShowMenu()
|
||||
{
|
||||
// clear all buttons from menu
|
||||
ClearSuggestionArea();
|
||||
HideTooltip();
|
||||
|
||||
// if no category is selected
|
||||
if(activeCategory == "")
|
||||
{
|
||||
for (var category in keyPhrases)
|
||||
{
|
||||
AddButton(category, selectCategory, keyPhrases[category]["description"], "", "", category);
|
||||
}
|
||||
// change iframe size after buttons have been added
|
||||
UpdateSize();
|
||||
}
|
||||
// if a category is selected
|
||||
else
|
||||
{
|
||||
// add a button to leave the category
|
||||
var backbutton = AddButton("↑ back", leaveCategory);
|
||||
var pattern = keyPhrases[activeCategory]["pattern"];
|
||||
keyPhrases[activeCategory]["entries"].forEach(entry =>
|
||||
{
|
||||
var tempPattern = pattern;
|
||||
if(entry["pattern_override"] != "")
|
||||
{
|
||||
tempPattern = entry["pattern_override"];
|
||||
}
|
||||
|
||||
var button = AddButton(entry["phrase"], SelectPhrase, entry["description"], entry["phrase"],tempPattern, entry["phrase"]);
|
||||
|
||||
if(entry["show_if"] != "")
|
||||
conditionalButtons.push({element:button,condition:entry["show_if"]});
|
||||
});
|
||||
// change iframe size after buttons have been added
|
||||
UpdateSize();
|
||||
ButtonConditions();
|
||||
}
|
||||
}
|
||||
|
||||
// listen for clicks on the prompt field
|
||||
parentDoc.addEventListener("click", (e) =>
|
||||
{
|
||||
// skip if this frame is not visible
|
||||
if(!isVisible(frame))
|
||||
return;
|
||||
|
||||
// if the iframes prompt field is not set, get it and set it
|
||||
if(promptField === null)
|
||||
GetPromptField();
|
||||
|
||||
// get the field with focus
|
||||
var target = parentDoc.activeElement;
|
||||
|
||||
// if the field with focus is a prompt field, the %% placeholder %% is set in python
|
||||
if( target.placeholder === placeholder)
|
||||
{
|
||||
// generate menu
|
||||
ShowMenu();
|
||||
}
|
||||
else
|
||||
{
|
||||
// else hide the iframe
|
||||
frame.style.height = "0px";
|
||||
}
|
||||
});
|
||||
|
||||
// add custom style to iframe
|
||||
frame.classList.add("suggestion-frame");
|
||||
// clear suggestion area to remove the "javascript failed" message
|
||||
ClearSuggestionArea();
|
||||
// collapse the iframe by default
|
||||
frame.style.height = "0px";
|
||||
|
||||
// only execute once (even though multiple iframes exist)
|
||||
if(!parentDoc.hasOwnProperty('keyPhraseSuggestionsInitialized'))
|
||||
{
|
||||
// get parent document head
|
||||
var head = parentDoc.getElementsByTagName('head')[0];
|
||||
// add style tag
|
||||
var s = parentDoc.createElement('style');
|
||||
// set type attribute
|
||||
s.setAttribute('type', 'text/css');
|
||||
// add css forwarded from python
|
||||
if (s.styleSheet) { // IE
|
||||
s.styleSheet.cssText = parentCSS;
|
||||
} else { // the world
|
||||
s.appendChild(parentDoc.createTextNode(parentCSS));
|
||||
}
|
||||
var tooltip = parentDoc.createElement('div');
|
||||
tooltip.id = "phrase-tooltip";
|
||||
parentDoc.body.appendChild(tooltip);
|
||||
parentDoc["phraseTooltip"] = tooltip;
|
||||
// add style to head
|
||||
head.appendChild(s);
|
||||
// set flag so this only runs once
|
||||
parentDoc["keyPhraseSuggestionsInitialized"] = true;
|
||||
}
|
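The script above expects two globals injected from Python, `keyPhrases` and `thumbnails`. The shipped data/tags/key_phrases.json is not part of this diff, but from the fields read in ShowMenu(), SelectPhrase() and CheckButtonCondition() its shape appears to be roughly the following; the category name and values here are invented purely for illustration:

```python
# Sketch of the structure main.js appears to expect for data/tags/key_phrases.json,
# inferred from the attributes it reads; the actual shipped file may differ.
key_phrases = {
    "Style": {                          # one button per top-level category
        "description": "tooltip shown for the category button",
        "pattern": ", {} style",        # '{}' is replaced with the chosen phrase
        "entries": [
            {
                "phrase": "oil painting",        # button label and inserted text
                "description": "tooltip text for this phrase",
                "pattern_override": "",          # non-empty value replaces 'pattern'
                "show_if": "",                   # e.g. "empty": only shown while the prompt is empty
            },
        ],
    },
}
```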
69
scripts/custom_components/key_phrase_suggestions/parent.css
Normal file
@@ -0,0 +1,69 @@
.suggestion-frame
{
    /* make as small as possible */
    padding: 0px !important;
    margin: 0px !important;
    min-height: 0px !important;
    line-height: 0;

    /* animate transitions of the height property */
    -webkit-transition: height 1s;
    -moz-transition: height 1s;
    -ms-transition: height 1s;
    -o-transition: height 1s;
    transition: height 1s, y-overflow 300ms;

    /* block selection */
    user-select: none;
    -moz-user-select: none;
    -khtml-user-select: none;
    -webkit-user-select: none;
    -o-user-select: none;
}

#phrase-tooltip
{
    display: none;
    pointer-events: none;
    position: absolute;
    border-bottom-left-radius: 0.5rem;
    border-top-right-radius: 0.5rem;
    border-bottom-right-radius: 0.5rem;
    border: solid rgb(255,75,75) 2px;
    background-color: rgb(38, 39, 48);
    color: rgb(255,75,75);
    font-size: 1rem;
    font-family: "Source Sans Pro", sans-serif;
    padding: 0.5rem;

    cursor: default;
    user-select: none;
    -moz-user-select: none;
    -khtml-user-select: none;
    -webkit-user-select: none;
    -o-user-select: none;
    z-index: 1000;
}

#phrase-tooltip:has(img)
{
    transform: scale(1.25, 1.25);
    -ms-transform: scale(1.25, 1.25);
    -webkit-transform: scale(1.25, 1.25);
}

#phrase-tooltip>img
{
    pointer-events: none;
    border-bottom-left-radius: 0.5rem;
    border-top-right-radius: 0.5rem;
    border-bottom-right-radius: 0.5rem;

    cursor: default;
    user-select: none;
    -moz-user-select: none;
    -khtml-user-select: none;
    -webkit-user-select: none;
    -o-user-select: none;
    z-index: 1500;
}
766
scripts/hydrus_api/__init__.py
Normal file
@@ -0,0 +1,766 @@
|
||||
# Copyright (C) 2021 cryzed
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Affero General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import enum
|
||||
import json
|
||||
import os
|
||||
import typing as T
|
||||
from collections import abc
|
||||
|
||||
import requests
|
||||
|
||||
__version__ = "4.0.0"
|
||||
|
||||
DEFAULT_API_URL = "http://127.0.0.1:45869/"
|
||||
HYDRUS_METADATA_ENCODING = "utf-8"
|
||||
AUTHENTICATION_TIMEOUT_CODE = 419
|
||||
|
||||
|
||||
class HydrusAPIException(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ConnectionError(HydrusAPIException, requests.ConnectTimeout):
|
||||
pass
|
||||
|
||||
|
||||
class APIError(HydrusAPIException):
|
||||
def __init__(self, response: requests.Response):
|
||||
super().__init__(response.text)
|
||||
self.response = response
|
||||
|
||||
|
||||
class MissingParameter(APIError):
|
||||
pass
|
||||
|
||||
|
||||
class InsufficientAccess(APIError):
|
||||
pass
|
||||
|
||||
|
||||
class DatabaseLocked(APIError):
|
||||
pass
|
||||
|
||||
|
||||
class ServerError(APIError):
|
||||
pass
|
||||
|
||||
|
||||
# Customize IntEnum, so we can just do str(Enum.member) to get the string representation of its value unmodified,
|
||||
# without users having to access .value explicitly
|
||||
class StringableIntEnum(enum.IntEnum):
|
||||
def __str__(self):
|
||||
return str(self.value)
|
||||
|
||||
|
||||
@enum.unique
|
||||
class Permission(StringableIntEnum):
|
||||
IMPORT_URLS = 0
|
||||
IMPORT_FILES = 1
|
||||
ADD_TAGS = 2
|
||||
SEARCH_FILES = 3
|
||||
MANAGE_PAGES = 4
|
||||
MANAGE_COOKIES = 5
|
||||
MANAGE_DATABASE = 6
|
||||
ADD_NOTES = 7
|
||||
|
||||
|
||||
@enum.unique
|
||||
class URLType(StringableIntEnum):
|
||||
POST_URL = 0
|
||||
FILE_URL = 2
|
||||
GALLERY_URL = 3
|
||||
WATCHABLE_URL = 4
|
||||
UNKNOWN_URL = 5
|
||||
|
||||
|
||||
@enum.unique
|
||||
class ImportStatus(StringableIntEnum):
|
||||
IMPORTABLE = 0
|
||||
SUCCESS = 1
|
||||
EXISTS = 2
|
||||
PREVIOUSLY_DELETED = 3
|
||||
FAILED = 4
|
||||
VETOED = 7
|
||||
|
||||
|
||||
@enum.unique
|
||||
class TagAction(StringableIntEnum):
|
||||
ADD = 0
|
||||
DELETE = 1
|
||||
PEND = 2
|
||||
RESCIND_PENDING = 3
|
||||
PETITION = 4
|
||||
RESCIND_PETITION = 5
|
||||
|
||||
|
||||
@enum.unique
|
||||
class TagStatus(StringableIntEnum):
|
||||
CURRENT = 0
|
||||
PENDING = 1
|
||||
DELETED = 2
|
||||
PETITIONED = 3
|
||||
|
||||
|
||||
@enum.unique
|
||||
class PageType(StringableIntEnum):
|
||||
GALLERY_DOWNLOADER = 1
|
||||
SIMPLE_DOWNLOADER = 2
|
||||
HARD_DRIVE_IMPORT = 3
|
||||
PETITIONS = 5
|
||||
FILE_SEARCH = 6
|
||||
URL_DOWNLOADER = 7
|
||||
DUPLICATES = 8
|
||||
THREAD_WATCHER = 9
|
||||
PAGE_OF_PAGES = 10
|
||||
|
||||
|
||||
@enum.unique
|
||||
class FileSortType(StringableIntEnum):
|
||||
FILE_SIZE = 0
|
||||
DURATION = 1
|
||||
IMPORT_TIME = 2
|
||||
FILE_TYPE = 3
|
||||
RANDOM = 4
|
||||
WIDTH = 5
|
||||
HEIGHT = 6
|
||||
RATIO = 7
|
||||
NUMBER_OF_PIXELS = 8
|
||||
NUMBER_OF_TAGS = 9
|
||||
NUMBER_OF_MEDIA_VIEWS = 10
|
||||
TOTAL_MEDIA_VIEWTIME = 11
|
||||
APPROXIMATE_BITRATE = 12
|
||||
HAS_AUDIO = 13
|
||||
MODIFIED_TIME = 14
|
||||
FRAMERATE = 15
|
||||
NUMBER_OF_FRAMES = 16
|
||||
|
||||
|
||||
class BinaryFileLike(T.Protocol):
|
||||
def read(self):
|
||||
...
|
||||
|
||||
|
||||
# The client should accept all objects that either support the iterable or mapping protocol. We must ensure that objects
|
||||
# are either lists or dicts, so Python's json module can handle them
|
||||
class JSONEncoder(json.JSONEncoder):
|
||||
def default(self, object_: T.Any):
|
||||
if isinstance(object_, abc.Mapping):
|
||||
return dict(object_)
|
||||
if isinstance(object_, abc.Iterable):
|
||||
return list(object_)
|
||||
return super().default(object_)
|
||||
|
||||
|
||||
class Client:
|
||||
VERSION = 31
|
||||
|
||||
# Access Management
|
||||
_GET_API_VERSION_PATH = "/api_version"
|
||||
_REQUEST_NEW_PERMISSIONS_PATH = "/request_new_permissions"
|
||||
_GET_SESSION_KEY_PATH = "/session_key"
|
||||
_VERIFY_ACCESS_KEY_PATH = "/verify_access_key"
|
||||
_GET_SERVICES_PATH = "/get_services"
|
||||
|
||||
# Adding Files
|
||||
_ADD_FILE_PATH = "/add_files/add_file"
|
||||
_DELETE_FILES_PATH = "/add_files/delete_files"
|
||||
_UNDELETE_FILES_PATH = "/add_files/undelete_files"
|
||||
_ARCHIVE_FILES_PATH = "/add_files/archive_files"
|
||||
_UNARCHIVE_FILES_PATH = "/add_files/unarchive_files"
|
||||
|
||||
# Adding Tags
|
||||
_CLEAN_TAGS_PATH = "/add_tags/clean_tags"
|
||||
_SEARCH_TAGS_PATH = "/add_tags/search_tags"
|
||||
_ADD_TAGS_PATH = "/add_tags/add_tags"
|
||||
|
||||
# Adding URLs
|
||||
_GET_URL_FILES_PATH = "/add_urls/get_url_files"
|
||||
_GET_URL_INFO_PATH = "/add_urls/get_url_info"
|
||||
_ADD_URL_PATH = "/add_urls/add_url"
|
||||
_ASSOCIATE_URL_PATH = "/add_urls/associate_url"
|
||||
|
||||
# Adding Notes
|
||||
_SET_NOTES_PATH = "/add_notes/set_notes"
|
||||
_DELETE_NOTES_PATH = "/add_notes/delete_notes"
|
||||
|
||||
# Managing Cookies and HTTP Headers
|
||||
_GET_COOKIES_PATH = "/manage_cookies/get_cookies"
|
||||
_SET_COOKIES_PATH = "/manage_cookies/set_cookies"
|
||||
_SET_USER_AGENT_PATH = "/manage_headers/set_user_agent"
|
||||
|
||||
# Managing Pages
|
||||
_GET_PAGES_PATH = "/manage_pages/get_pages"
|
||||
_GET_PAGE_INFO_PATH = "/manage_pages/get_page_info"
|
||||
_ADD_FILES_TO_PAGE_PATH = "/manage_pages/add_files"
|
||||
_FOCUS_PAGE_PATH = "/manage_pages/focus_page"
|
||||
|
||||
# Searching and Fetching Files
|
||||
_SEARCH_FILES_PATH = "/get_files/search_files"
|
||||
_GET_FILE_METADATA_PATH = "/get_files/file_metadata"
|
||||
_GET_FILE_PATH = "/get_files/file"
|
||||
_GET_THUMBNAIL_PATH = "/get_files/thumbnail"
|
||||
|
||||
# Managing the Database
|
||||
_LOCK_DATABASE_PATH = "/manage_database/lock_on"
|
||||
_UNLOCK_DATABASE_PATH = "/manage_database/lock_off"
|
||||
_MR_BONES_PATH = "/manage_database/mr_bones"
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
access_key = None,
|
||||
api_url: str = DEFAULT_API_URL,
|
||||
session = None,
|
||||
):
|
||||
"""
|
||||
See https://hydrusnetwork.github.io/hydrus/help/client_api.html for documentation.
|
||||
"""
|
||||
|
||||
self.access_key = access_key
|
||||
self.api_url = api_url.rstrip("/")
|
||||
self.session = session or requests.Session()
|
||||
|
||||
def _api_request(self, method: str, path: str, **kwargs: T.Any):
|
||||
if self.access_key is not None:
|
||||
kwargs.setdefault("headers", {}).update({"Hydrus-Client-API-Access-Key": self.access_key})
|
||||
|
||||
# Make sure we use our custom JSONEncoder that can serialize all objects that implement the iterable or mapping
|
||||
# protocol
|
||||
json_data = kwargs.pop("json", None)
|
||||
if json_data is not None:
|
||||
kwargs["data"] = json.dumps(json_data, cls=JSONEncoder)
|
||||
# Since we aren't using the json keyword-argument, we have to set the Content-Type manually
|
||||
kwargs["headers"]["Content-Type"] = "application/json"
|
||||
|
||||
try:
|
||||
response = self.session.request(method, self.api_url + path, **kwargs)
|
||||
except requests.RequestException as error:
|
||||
# Re-raise connection and timeout errors as hydrus.ConnectionErrors so these are easier to handle for
|
||||
# client applications
|
||||
raise ConnectionError(*error.args)
|
||||
|
||||
try:
|
||||
response.raise_for_status()
|
||||
except requests.HTTPError:
|
||||
if response.status_code == requests.codes.bad_request:
|
||||
raise MissingParameter(response)
|
||||
elif response.status_code in {
|
||||
requests.codes.unauthorized,
|
||||
requests.codes.forbidden,
|
||||
AUTHENTICATION_TIMEOUT_CODE,
|
||||
}:
|
||||
raise InsufficientAccess(response)
|
||||
elif response.status_code == requests.codes.service_unavailable:
|
||||
raise DatabaseLocked(response)
|
||||
elif response.status_code == requests.codes.server_error:
|
||||
raise ServerError(response)
|
||||
raise APIError(response)
|
||||
|
||||
return response
|
||||
|
||||
def get_api_version(self):
|
||||
response = self._api_request("GET", self._GET_API_VERSION_PATH)
|
||||
return response.json()
|
||||
|
||||
def request_new_permissions(self, name, permissions):
|
||||
response = self._api_request(
|
||||
"GET",
|
||||
self._REQUEST_NEW_PERMISSIONS_PATH,
|
||||
params={"name": name, "basic_permissions": json.dumps(permissions, cls=JSONEncoder)},
|
||||
)
|
||||
return response.json()["access_key"]
|
||||
|
||||
def get_session_key(self):
|
||||
response = self._api_request("GET", self._GET_SESSION_KEY_PATH)
|
||||
return response.json()["session_key"]
|
||||
|
||||
def verify_access_key(self):
|
||||
response = self._api_request("GET", self._VERIFY_ACCESS_KEY_PATH)
|
||||
return response.json()
|
||||
|
||||
def get_services(self):
|
||||
response = self._api_request("GET", self._GET_SERVICES_PATH)
|
||||
return response.json()
|
||||
|
||||
def add_file(self, path_or_file: T.Union[str, os.PathLike, BinaryFileLike]):
|
||||
if isinstance(path_or_file, (str, os.PathLike)):
|
||||
response = self._api_request("POST", self._ADD_FILE_PATH, json={"path": os.fspath(path_or_file)})
|
||||
else:
|
||||
response = self._api_request(
|
||||
"POST",
|
||||
self._ADD_FILE_PATH,
|
||||
data=path_or_file.read(),
|
||||
headers={"Content-Type": "application/octet-stream"},
|
||||
)
|
||||
|
||||
return response.json()
|
||||
|
||||
def delete_files(
|
||||
self,
|
||||
hashes = None,
|
||||
file_ids = None,
|
||||
file_service_name = None,
|
||||
file_service_key = None,
|
||||
reason = None
|
||||
):
|
||||
if hashes is None and file_ids is None:
|
||||
raise ValueError("At least one of hashes, file_ids is required")
|
||||
if file_service_name is not None and file_service_key is not None:
|
||||
raise ValueError("Exactly one of file_service_name, file_service_key is required")
|
||||
|
||||
payload: dict[str, T.Any] = {}
|
||||
if hashes is not None:
|
||||
payload["hashes"] = hashes
|
||||
if file_ids is not None:
|
||||
payload["file_ids"] = file_ids
|
||||
if file_service_name is not None:
|
||||
payload["file_service_name"] = file_service_name
|
||||
if file_service_key is not None:
|
||||
payload["file_service_key"] = file_service_key
|
||||
if reason is not None:
|
||||
payload["reason"] = reason
|
||||
|
||||
self._api_request("POST", self._DELETE_FILES_PATH, json=payload)
|
||||
|
||||
def undelete_files(
|
||||
self,
|
||||
hashes = None,
|
||||
file_ids = None,
|
||||
file_service_name = None,
|
||||
file_service_key = None,
|
||||
):
|
||||
if hashes is None and file_ids is None:
|
||||
raise ValueError("At least one of hashes, file_ids is required")
|
||||
if file_service_name is not None and file_service_key is not None:
|
||||
raise ValueError("Exactly one of file_service_name, file_service_key is required")
|
||||
|
||||
payload: dict[str, T.Any] = {}
|
||||
if hashes is not None:
|
||||
payload["hashes"] = hashes
|
||||
if file_ids is not None:
|
||||
payload["file_ids"] = file_ids
|
||||
if file_service_name is not None:
|
||||
payload["file_service_name"] = file_service_name
|
||||
if file_service_key is not None:
|
||||
payload["file_service_key"] = file_service_key
|
||||
|
||||
self._api_request("POST", self._UNDELETE_FILES_PATH, json=payload)
|
||||
|
||||
def archive_files(
|
||||
self,
|
||||
hashes = None,
|
||||
file_ids = None
|
||||
):
|
||||
if hashes is None and file_ids is None:
|
||||
raise ValueError("At least one of hashes, file_ids is required")
|
||||
|
||||
payload: dict[str, T.Any] = {}
|
||||
if hashes is not None:
|
||||
payload["hashes"] = hashes
|
||||
if file_ids is not None:
|
||||
payload["file_ids"] = file_ids
|
||||
|
||||
self._api_request("POST", self._ARCHIVE_FILES_PATH, json=payload)
|
||||
|
||||
def unarchive_files(
|
||||
self,
|
||||
hashes = None,
|
||||
file_ids = None
|
||||
):
|
||||
if hashes is None and file_ids is None:
|
||||
raise ValueError("At least one of hashes, file_ids is required")
|
||||
|
||||
payload: dict[str, T.Any] = {}
|
||||
if hashes is not None:
|
||||
payload["hashes"] = hashes
|
||||
if file_ids is not None:
|
||||
payload["file_ids"] = file_ids
|
||||
|
||||
self._api_request("POST", self._UNARCHIVE_FILES_PATH, json=payload)
|
||||
|
||||
def clean_tags(self, tags ):
|
||||
response = self._api_request("GET", self._CLEAN_TAGS_PATH, params={"tags": json.dumps(tags, cls=JSONEncoder)})
|
||||
return response.json()["tags"]
|
||||
|
||||
def search_tags(
|
||||
self,
|
||||
search: str,
|
||||
tag_service_key = None,
|
||||
tag_service_name = None
|
||||
):
|
||||
if tag_service_name is not None and tag_service_key is not None:
|
||||
raise ValueError("Exactly one of tag_service_name, tag_service_key is required")
|
||||
|
||||
payload: dict[str, T.Any] = {"search": search}
|
||||
if tag_service_key is not None:
|
||||
payload["tag_service_key"] = tag_service_key
|
||||
if tag_service_name is not None:
|
||||
payload["tag_service_name"] = tag_service_name
|
||||
|
||||
response = self._api_request("GET", self._SEARCH_TAGS_PATH, params=payload)
|
||||
return response.json()["tags"]
|
||||
|
||||
def add_tags(
|
||||
self,
|
||||
hashes = None,
|
||||
file_ids = None,
|
||||
service_names_to_tags = None,
|
||||
service_keys_to_tags = None,
|
||||
service_names_to_actions_to_tags = None,
|
||||
service_keys_to_actions_to_tags = None,
|
||||
):
|
||||
if hashes is None and file_ids is None:
|
||||
raise ValueError("At least one of hashes, file_ids is required")
|
||||
if (
|
||||
service_names_to_tags is None
|
||||
and service_keys_to_tags is None
|
||||
and service_names_to_actions_to_tags is None
|
||||
and service_keys_to_actions_to_tags is None
|
||||
):
|
||||
raise ValueError(
|
||||
"At least one of service_names_to_tags, service_keys_to_tags, service_names_to_actions_to_tags or "
|
||||
"service_keys_to_actions_to_tags is required"
|
||||
)
|
||||
|
||||
payload: dict[str, T.Any] = {}
|
||||
if hashes is not None:
|
||||
payload["hashes"] = hashes
|
||||
if file_ids is not None:
|
||||
payload["file_ids"] = file_ids
|
||||
if service_names_to_tags is not None:
|
||||
payload["service_names_to_tags"] = service_names_to_tags
|
||||
if service_keys_to_tags is not None:
|
||||
payload["service_keys_to_tags"] = service_keys_to_tags
|
||||
if service_names_to_actions_to_tags is not None:
|
||||
payload["service_names_to_actions_to_tags"] = service_names_to_actions_to_tags
|
||||
if service_keys_to_actions_to_tags is not None:
|
||||
payload["service_keys_to_actions_to_tags"] = service_keys_to_actions_to_tags
|
||||
|
||||
self._api_request("POST", self._ADD_TAGS_PATH, json=payload)
|
||||
|
||||
def get_url_files(self, url: str):
|
||||
response = self._api_request("GET", self._GET_URL_FILES_PATH, params={"url": url})
|
||||
return response.json()
|
||||
|
||||
def get_url_info(self, url: str):
|
||||
response = self._api_request("GET", self._GET_URL_INFO_PATH, params={"url": url})
|
||||
return response.json()
|
||||
|
||||
def add_url(
|
||||
self,
|
||||
url: str,
|
||||
destination_page_key = None,
|
||||
destination_page_name = None,
|
||||
show_destination_page = None,
|
||||
service_names_to_additional_tags = None,
|
||||
service_keys_to_additional_tags = None,
|
||||
filterable_tags = None,
|
||||
):
|
||||
if destination_page_key is not None and destination_page_name is not None:
|
||||
raise ValueError("Exactly one of destination_page_key, destination_page_name is required")
|
||||
|
||||
payload: dict[str, T.Any] = {"url": url}
|
||||
if destination_page_key is not None:
|
||||
payload["destination_page_key"] = destination_page_key
|
||||
if destination_page_name is not None:
|
||||
payload["destination_page_name"] = destination_page_name
|
||||
if show_destination_page is not None:
|
||||
payload["show_destination_page"] = show_destination_page
|
||||
if service_names_to_additional_tags is not None:
|
||||
payload["service_names_to_additional_tags"] = service_names_to_additional_tags
|
||||
if service_keys_to_additional_tags is not None:
|
||||
payload["service_keys_to_additional_tags"] = service_keys_to_additional_tags
|
||||
if filterable_tags is not None:
|
||||
payload["filterable_tags"] = filterable_tags
|
||||
|
||||
response = self._api_request("POST", self._ADD_URL_PATH, json=payload)
|
||||
return response.json()
|
||||
|
||||
def associate_url(
|
||||
self,
|
||||
hashes = None,
|
||||
file_ids = None,
|
||||
urls_to_add = None,
|
||||
urls_to_delete = None,
|
||||
):
|
||||
if hashes is None and file_ids is None:
|
||||
raise ValueError("At least one of hashes, file_ids is required")
|
||||
if urls_to_add is None and urls_to_delete is None:
|
||||
raise ValueError("At least one of urls_to_add, urls_to_delete is required")
|
||||
|
||||
payload: dict[str, T.Any] = {}
|
||||
if hashes is not None:
|
||||
payload["hashes"] = hashes
|
||||
if file_ids is not None:
|
||||
payload["file_ids"] = file_ids
|
||||
if urls_to_add is not None:
|
||||
urls_to_add = urls_to_add
|
||||
payload["urls_to_add"] = urls_to_add
|
||||
if urls_to_delete is not None:
|
||||
urls_to_delete = urls_to_delete
|
||||
payload["urls_to_delete"] = urls_to_delete
|
||||
|
||||
self._api_request("POST", self._ASSOCIATE_URL_PATH, json=payload)
|
||||
|
||||
def set_notes(self, notes , hash_= None, file_id = None):
|
||||
if (hash_ is None and file_id is None) or (hash_ is not None and file_id is not None):
|
||||
raise ValueError("Exactly one of hash_, file_id is required")
|
||||
|
||||
payload: dict[str, T.Any] = {"notes": notes}
|
||||
if hash_ is not None:
|
||||
payload["hash"] = hash_
|
||||
if file_id is not None:
|
||||
payload["file_id"] = file_id
|
||||
|
||||
self._api_request("POST", self._SET_NOTES_PATH, json=payload)
|
||||
|
||||
def delete_notes(
|
||||
self,
|
||||
note_names ,
|
||||
hash_ = None,
|
||||
file_id = None
|
||||
):
|
||||
if (hash_ is None and file_id is None) or (hash_ is not None and file_id is not None):
|
||||
raise ValueError("Exactly one of hash_, file_id is required")
|
||||
|
||||
payload: dict[str, T.Any] = {"note_names": note_names}
|
||||
if hash_ is not None:
|
||||
payload["hash"] = hash_
|
||||
if file_id is not None:
|
||||
payload["file_id"] = file_id
|
||||
|
||||
self._api_request("POST", self._DELETE_NOTES_PATH, json=payload)
|
||||
|
||||
def get_cookies(self, domain: str):
|
||||
response = self._api_request("GET", self._GET_COOKIES_PATH, params={"domain": domain})
|
||||
return response.json()["cookies"]
|
||||
|
||||
def set_cookies(self, cookies ):
|
||||
self._api_request("POST", self._SET_COOKIES_PATH, json={"cookies": cookies})
|
||||
|
||||
def set_user_agent(self, user_agent: str):
|
||||
self._api_request("POST", self._SET_USER_AGENT_PATH, json={"user-agent": user_agent})
|
||||
|
||||
def get_pages(self):
|
||||
response = self._api_request("GET", self._GET_PAGES_PATH)
|
||||
return response.json()["pages"]
|
||||
|
||||
def get_page_info(self, page_key: str, simple = None):
|
||||
parameters = {"page_key": page_key}
|
||||
if simple is not None:
|
||||
parameters["simple"] = json.dumps(simple, cls=JSONEncoder)
|
||||
|
||||
response = self._api_request("GET", self._GET_PAGE_INFO_PATH, params=parameters)
|
||||
return response.json()["page_info"]
|
||||
|
||||
def add_files_to_page(
|
||||
self,
|
||||
page_key: str,
|
||||
file_ids = None,
|
||||
hashes = None
|
||||
):
|
||||
if file_ids is None and hashes is None:
|
||||
raise ValueError("At least one of file_ids, hashes is required")
|
||||
|
||||
payload: dict[str, T.Any] = {"page_key": page_key}
|
||||
if file_ids is not None:
|
||||
payload["file_ids"] = file_ids
|
||||
if hashes is not None:
|
||||
payload["hashes"] = hashes
|
||||
|
||||
self._api_request("POST", self._ADD_FILES_TO_PAGE_PATH, json=payload)
|
||||
|
||||
def focus_page(self, page_key: str):
|
||||
self._api_request("POST", self._FOCUS_PAGE_PATH, json={"page_key": page_key})
|
||||
|
||||
def search_files(
|
||||
self,
|
||||
tags,
|
||||
file_service_name = None,
|
||||
file_service_key = None,
|
||||
tag_service_name = None,
|
||||
tag_service_key = None,
|
||||
file_sort_type = None,
|
||||
file_sort_asc = None,
|
||||
return_hashes = None,
|
||||
):
|
||||
if file_service_name is not None and file_service_key is not None:
|
||||
raise ValueError("Exactly one of file_service_name, file_service_key is required")
|
||||
if tag_service_name is not None and tag_service_key is not None:
|
||||
raise ValueError("Exactly one of tag_service_name, tag_service_key is required")
|
||||
|
||||
parameters: dict[str, T.Union[str, int]] = {"tags": json.dumps(tags, cls=JSONEncoder)}
|
||||
if file_service_name is not None:
|
||||
parameters["file_service_name"] = file_service_name
|
||||
if file_service_key is not None:
|
||||
parameters["file_service_key"] = file_service_key
|
||||
|
||||
if tag_service_name is not None:
|
||||
parameters["tag_service_name"] = tag_service_name
|
||||
if tag_service_key is not None:
|
||||
parameters["tag_service_key"] = tag_service_key
|
||||
|
||||
if file_sort_type is not None:
|
||||
parameters["file_sort_type"] = file_sort_type
|
||||
if file_sort_asc is not None:
|
||||
parameters["file_sort_asc"] = json.dumps(file_sort_asc, cls=JSONEncoder)
|
||||
if return_hashes is not None:
|
||||
parameters["return_hashes"] = json.dumps(return_hashes, cls=JSONEncoder)
|
||||
|
||||
response = self._api_request("GET", self._SEARCH_FILES_PATH, params=parameters)
|
||||
return response.json()["hashes" if return_hashes else "file_ids"]
|
||||
|
||||
def get_file_metadata(
|
||||
self,
|
||||
hashes = None,
|
||||
file_ids = None,
|
||||
create_new_file_ids = None,
|
||||
only_return_identifiers = None,
|
||||
only_return_basic_information = None,
|
||||
detailed_url_information = None,
|
||||
hide_service_name_tags = None,
|
||||
include_notes = None
|
||||
):
|
||||
if hashes is None and file_ids is None:
|
||||
raise ValueError("At least one of hashes, file_ids is required")
|
||||
|
||||
parameters = {}
|
||||
if hashes is not None:
|
||||
parameters["hashes"] = json.dumps(hashes, cls=JSONEncoder)
|
||||
if file_ids is not None:
|
||||
parameters["file_ids"] = json.dumps(file_ids, cls=JSONEncoder)
|
||||
|
||||
if create_new_file_ids is not None:
|
||||
parameters["create_new_file_ids"] = json.dumps(create_new_file_ids, cls=JSONEncoder)
|
||||
if only_return_identifiers is not None:
|
||||
parameters["only_return_identifiers"] = json.dumps(only_return_identifiers, cls=JSONEncoder)
|
||||
if only_return_basic_information is not None:
|
||||
parameters["only_return_basic_information"] = json.dumps(only_return_basic_information, cls=JSONEncoder)
|
||||
if detailed_url_information is not None:
|
||||
parameters["detailed_url_information"] = json.dumps(detailed_url_information, cls=JSONEncoder)
|
||||
if hide_service_name_tags is not None:
|
||||
parameters["hide_service_name_tags"] = json.dumps(hide_service_name_tags, cls=JSONEncoder)
|
||||
if include_notes is not None:
|
||||
parameters["include_notes"] = json.dumps(include_notes, cls=JSONEncoder)
|
||||
|
||||
response = self._api_request("GET", self._GET_FILE_METADATA_PATH, params=parameters)
|
||||
return response.json()["metadata"]
|
||||
|
||||
def get_file(self, hash_ = None, file_id = None):
|
||||
if (hash_ is None and file_id is None) or (hash_ is not None and file_id is not None):
|
||||
raise ValueError("Exactly one of hash_, file_id is required")
|
||||
|
||||
parameters: dict[str, T.Union[str, int]] = {}
|
||||
if hash_ is not None:
|
||||
parameters["hash"] = hash_
|
||||
if file_id is not None:
|
||||
parameters["file_id"] = file_id
|
||||
|
||||
return self._api_request("GET", self._GET_FILE_PATH, params=parameters, stream=True)
|
||||
|
||||
def get_thumbnail(self, hash_ = None, file_id = None):
|
||||
if (hash_ is None and file_id is None) or (hash_ is not None and file_id is not None):
|
||||
raise ValueError("Exactly one of hash_, file_id is required")
|
||||
|
||||
parameters: dict[str, T.Union[str, int]] = {}
|
||||
if hash_ is not None:
|
||||
parameters["hash"] = hash_
|
||||
if file_id is not None:
|
||||
parameters["file_id"] = file_id
|
||||
|
||||
return self._api_request("GET", self._GET_THUMBNAIL_PATH, params=parameters, stream=True)
|
||||
|
||||
def lock_database(self):
|
||||
self._api_request("POST", self._LOCK_DATABASE_PATH)
|
||||
|
||||
def unlock_database(self):
|
||||
self._api_request("POST", self._UNLOCK_DATABASE_PATH)
|
||||
|
||||
def get_mr_bones(self):
|
||||
return self._api_request("GET", self._MR_BONES_PATH).json()["boned_stats"]
|
||||
|
||||
def add_and_tag_files(
|
||||
self,
|
||||
paths_or_files,
|
||||
tags ,
|
||||
service_names = None,
|
||||
service_keys = None,
|
||||
):
|
||||
"""Convenience method to add and tag multiple files at the same time.
|
||||
|
||||
If service_names and service_keys aren't specified, the default service name "my tags" will be used. If a file
|
||||
already exists in Hydrus, it will also be tagged.
|
||||
|
||||
Returns:
|
||||
list[dict[str, T.Any]]: Returns results of all `Client.add_file()` calls, matching the order of the
|
||||
paths_or_files iterable
|
||||
"""
|
||||
if service_names is None and service_keys is None:
|
||||
service_names = ("my tags",)
|
||||
|
||||
results = []
|
||||
hashes = set()
|
||||
for path_or_file in paths_or_files:
|
||||
result = self.add_file(path_or_file)
|
||||
results.append(result)
|
||||
if result["status"] != ImportStatus.FAILED:
|
||||
hashes.add(result["hash"])
|
||||
|
||||
service_names_to_tags = {name: tags for name in service_names} if service_names is not None else None
|
||||
service_keys_to_tags = {key: tags for key in service_keys} if service_keys is not None else None
|
||||
# Ignore type, we know that hashes only contains strings
|
||||
self.add_tags(hashes, service_names_to_tags=service_names_to_tags, service_keys_to_tags=service_keys_to_tags) # type: ignore
|
||||
return results
|
||||
|
||||
def get_page_list(self):
|
||||
"""Convenience method that returns a flattened version of the page tree from `Client.get_pages()`.
|
||||
|
||||
Returns:
|
||||
list[dict[str, T.Any]]: A list of every "pages" value in the page tree in pre-order (NLR)
|
||||
"""
|
||||
tree = self.get_pages()
|
||||
pages = []
|
||||
|
||||
def walk_tree(page: dict[str, T.Any]):
|
||||
pages.append(page)
|
||||
# Ignore type, we know that pages is always a list
|
||||
for sub_page in page.get("pages", ()): # type: ignore
|
||||
# Ignore type, we know that sub_page is always a dict
|
||||
walk_tree(sub_page) # type: ignore
|
||||
|
||||
walk_tree(tree)
|
||||
return pages
|
||||
|
||||
|
||||
__all__ = [
|
||||
"__version__",
|
||||
"DEFAULT_API_URL",
|
||||
"HYDRUS_METADATA_ENCODING",
|
||||
"HydrusAPIException",
|
||||
"ConnectionError",
|
||||
"APIError",
|
||||
"MissingParameter",
|
||||
"InsufficientAccess",
|
||||
"DatabaseLocked",
|
||||
"ServerError",
|
||||
"Permission",
|
||||
"URLType",
|
||||
"ImportStatus",
|
||||
"TagAction",
|
||||
"TagStatus",
|
||||
"PageType",
|
||||
"FileSortType",
|
||||
"Client",
|
||||
]
|
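A short usage sketch for the vendored client; the access key and file path below are placeholders, and only methods defined in this file are called:

```python
from hydrus_api import Client, ImportStatus

# placeholder access key; a real key comes from request_new_permissions()
# or from the Hydrus client's "review services" dialog
client = Client(access_key="0123456789abcdef", api_url="http://127.0.0.1:45869")

print(client.get_api_version())

# import a file and tag it if the import did not fail
result = client.add_file("/path/to/image.png")
if result["status"] != ImportStatus.FAILED:
    client.add_tags(hashes=[result["hash"]],
                    service_names_to_tags={"my tags": ["stable diffusion", "generated"]})

# search by tags; returns file ids unless return_hashes is set
file_ids = client.search_files(["stable diffusion"])
metadata = client.get_file_metadata(file_ids=file_ids)
```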
102
scripts/hydrus_api/utils.py
Normal file
@@ -0,0 +1,102 @@
# Copyright (C) 2021 cryzed
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import collections
import os
import typing as T
from collections import abc

from hydrus_api import DEFAULT_API_URL, HYDRUS_METADATA_ENCODING, Client, Permission

X = T.TypeVar("X")


class TextFileLike(T.Protocol):
    def read(self) -> str:
        pass


def verify_permissions(
    client: Client, permissions: abc.Iterable[T.Union[int, Permission]], exact: bool = False
) -> bool:
    granted_permissions = set(client.verify_access_key()["basic_permissions"])
    return granted_permissions == set(permissions) if exact else granted_permissions.issuperset(permissions)


def cli_request_api_key(
    name: str,
    permissions: abc.Iterable[T.Union[int, Permission]],
    verify: bool = True,
    exact: bool = False,
    api_url: str = DEFAULT_API_URL,
) -> str:
    while True:
        input(
            'Navigate to "services->review services->local->client api" in the Hydrus client and click "add->from api '
            'request". Then press enter to continue...'
        )
        access_key = Client(api_url=api_url).request_new_permissions(name, permissions)
        input("Press OK and then apply in the Hydrus client dialog. Then press enter to continue...")

        client = Client(access_key, api_url)
        if verify and not verify_permissions(client, permissions, exact):
            granted = client.verify_access_key()["basic_permissions"]
            print(
                f"The granted permissions ({granted}) differ from the requested permissions ({permissions}), please "
                "grant all requested permissions."
            )
            continue

        return access_key


def parse_hydrus_metadata(text: str) -> collections.defaultdict[T.Optional[str], set[str]]:
    namespaces = collections.defaultdict(set)
    for line in (line.strip() for line in text.splitlines()):
        if not line:
            continue

        parts = line.split(":", 1)
        namespace, tag = (None, line) if len(parts) == 1 else parts
        namespaces[namespace].add(tag)

    # Ignore type, mypy has trouble figuring out that tag isn't optional
    return namespaces  # type: ignore


def parse_hydrus_metadata_file(
    path_or_file: T.Union[str, os.PathLike, TextFileLike]
) -> collections.defaultdict[T.Optional[str], set[str]]:
    if isinstance(path_or_file, (str, os.PathLike)):
        with open(path_or_file, encoding=HYDRUS_METADATA_ENCODING) as file:
            return parse_hydrus_metadata(file.read())

    return parse_hydrus_metadata(path_or_file.read())


# Useful for splitting up requests to get_file_metadata()
def yield_chunks(sequence: T.Sequence[X], chunk_size: int, offset: int = 0) -> T.Generator[T.Sequence[X], None, None]:
    while offset < len(sequence):
        yield sequence[offset : offset + chunk_size]
        offset += chunk_size


__all__ = [
    "verify_permissions",
    "cli_request_api_key",
    "parse_hydrus_metadata",
    "parse_hydrus_metadata_file",
    "yield_chunks",
]
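The yield_chunks() helper is intended for batching Client.get_file_metadata() calls. A small sketch combining it with verify_permissions(); the access key and chunk size are placeholder values, and the metadata fields printed are those returned by the Hydrus client API:

```python
from hydrus_api import Client, Permission
from hydrus_api.utils import verify_permissions, yield_chunks

client = Client(access_key="0123456789abcdef")  # placeholder key

# make sure the key is allowed to search files before querying
if verify_permissions(client, [Permission.SEARCH_FILES]):
    file_ids = client.search_files(["stable diffusion"])
    # request metadata in chunks of 100 ids to keep each API call small
    for chunk in yield_chunks(file_ids, 100):
        for info in client.get_file_metadata(file_ids=chunk):
            print(info["file_id"], info.get("hash"))
```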
@ -30,12 +30,17 @@ import torch
|
||||
import skimage
|
||||
from ldm.models.diffusion.ddim import DDIMSampler
|
||||
from ldm.models.diffusion.plms import PLMSSampler
|
||||
|
||||
# streamlit components
|
||||
from custom_components import key_phrase_suggestions
|
||||
|
||||
# Temp imports
|
||||
|
||||
|
||||
# end of imports
|
||||
#---------------------------------------------------------------------------------------------------------------
|
||||
|
||||
key_phrase_suggestions.init()
|
||||
|
||||
try:
|
||||
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
|
||||
@ -49,7 +54,7 @@ def img2img(prompt: str = '', init_info: any = None, init_info_mask: any = None,
|
||||
mask_restore: bool = False, ddim_steps: int = 50, sampler_name: str = 'DDIM',
|
||||
n_iter: int = 1, cfg_scale: float = 7.5, denoising_strength: float = 0.8,
|
||||
seed: int = -1, noise_mode: int = 0, find_noise_steps: str = "", height: int = 512, width: int = 512, resize_mode: int = 0, fp = None,
|
||||
variant_amount: float = None, variant_seed: int = None, ddim_eta:float = 0.0,
|
||||
variant_amount: float = 0.0, variant_seed: int = None, ddim_eta:float = 0.0,
|
||||
write_info_files:bool = True, separate_prompts:bool = False, normalize_prompt_weights:bool = True,
|
||||
save_individual_images: bool = True, save_grid: bool = True, group_by_prompt: bool = True,
|
||||
save_as_jpg: bool = True, use_GFPGAN: bool = True, GFPGAN_model: str = 'GFPGANv1.4',
|
||||
@ -202,7 +207,7 @@ def img2img(prompt: str = '', init_info: any = None, init_info_mask: any = None,
|
||||
samples_ddim = K.sampling.__dict__[f'sample_{sampler.get_sampler_name()}'](model_wrap_cfg, xi, sigma_sched,
|
||||
extra_args={'cond': conditioning, 'uncond': unconditional_conditioning,
|
||||
'cond_scale': cfg_scale, 'mask': z_mask, 'x0': x0, 'xi': xi}, disable=False,
|
||||
callback=generation_callback)
|
||||
callback=generation_callback if not server_state["bridge"] else None)
|
||||
else:
|
||||
|
||||
x0, z_mask = init_data
|
||||
@ -234,7 +239,7 @@ def img2img(prompt: str = '', init_info: any = None, init_info_mask: any = None,
|
||||
from skimage import exposure
|
||||
do_color_correction = True
|
||||
except:
|
||||
print("Install scikit-image to perform color correction on loopback")
|
||||
logger.error("Install scikit-image to perform color correction on loopback")
|
||||
|
||||
for i in range(n_iter):
|
||||
if do_color_correction and i == 0:
|
||||
@ -365,7 +370,9 @@ def layout():
|
||||
img2img_input_col, img2img_generate_col = st.columns([10,1])
|
||||
with img2img_input_col:
|
||||
#prompt = st.text_area("Input Text","")
|
||||
prompt = st.text_area("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.")
|
||||
placeholder = "A corgi wearing a top hat as an oil painting."
|
||||
prompt = st.text_area("Input Text","", placeholder=placeholder, height=54)
|
||||
key_phrase_suggestions.suggestion_area(placeholder)
|
||||
|
||||
# Every form must have a submit button, the extra blank spaces is a temp way to align it with the input field. Needs to be done in CSS or some other way.
|
||||
img2img_generate_col.write("")
|
||||
@ -374,7 +381,7 @@ def layout():
|
||||
|
||||
|
||||
# creating the page layout using columns
|
||||
col1_img2img_layout, col2_img2img_layout, col3_img2img_layout = st.columns([1,2,2], gap="small")
|
||||
col1_img2img_layout, col2_img2img_layout, col3_img2img_layout = st.columns([1,2,2], gap="medium")
|
||||
|
||||
with col1_img2img_layout:
|
||||
# If we have custom models available on the "models/custom"
|
||||
@ -386,9 +393,9 @@ def layout():
|
||||
help="Select the model you want to use. This option is only available if you have custom models \
|
||||
on your 'models/custom' folder. The model name that will be shown here is the same as the name\
|
||||
the file for the model has on said folder, it is recommended to give the .ckpt file a name that \
|
||||
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.4")
|
||||
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.5")
|
||||
else:
|
||||
st.session_state["custom_model"] = "Stable Diffusion v1.4"
|
||||
st.session_state["custom_model"] = "Stable Diffusion v1.5"
|
||||
|
||||
|
||||
st.session_state["sampling_steps"] = st.number_input("Sampling Steps", value=st.session_state['defaults'].img2img.sampling_steps.value,
|
||||
@ -405,23 +412,24 @@ def layout():
|
||||
value=st.session_state['defaults'].img2img.height.value, step=st.session_state['defaults'].img2img.height.step)
|
||||
seed = st.text_input("Seed:", value=st.session_state['defaults'].img2img.seed, help=" The seed to use, if left blank a random seed will be generated.")
|
||||
|
||||
cfg_scale = st.slider("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].img2img.cfg_scale.min_value,
|
||||
max_value=st.session_state['defaults'].img2img.cfg_scale.max_value, value=st.session_state['defaults'].img2img.cfg_scale.value,
|
||||
step=st.session_state['defaults'].img2img.cfg_scale.step, help="How strongly the image should follow the prompt.")
|
||||
cfg_scale = st.number_input("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].img2img.cfg_scale.min_value,
|
||||
value=st.session_state['defaults'].img2img.cfg_scale.value,
|
||||
step=st.session_state['defaults'].img2img.cfg_scale.step,
|
||||
help="How strongly the image should follow the prompt.")
|
||||
|
||||
st.session_state["denoising_strength"] = st.slider("Denoising Strength:", value=st.session_state['defaults'].img2img.denoising_strength.value,
|
||||
min_value=st.session_state['defaults'].img2img.denoising_strength.min_value,
|
||||
max_value=st.session_state['defaults'].img2img.denoising_strength.max_value,
|
||||
step=st.session_state['defaults'].img2img.denoising_strength.step)
|
||||
min_value=st.session_state['defaults'].img2img.denoising_strength.min_value,
|
||||
max_value=st.session_state['defaults'].img2img.denoising_strength.max_value,
|
||||
step=st.session_state['defaults'].img2img.denoising_strength.step)
|
||||
|
||||
|
||||
mask_expander = st.empty()
|
||||
with mask_expander.expander("Mask"):
|
||||
mask_mode_list = ["Mask", "Inverted mask", "Image alpha"]
|
||||
mask_mode = st.selectbox("Mask Mode", mask_mode_list,
|
||||
help="Select how you want your image to be masked.\"Mask\" modifies the image where the mask is white.\n\
|
||||
\"Inverted mask\" modifies the image where the mask is black. \"Image alpha\" modifies the image where the image is transparent."
|
||||
)
|
||||
help="Select how you want your image to be masked.\"Mask\" modifies the image where the mask is white.\n\
|
||||
\"Inverted mask\" modifies the image where the mask is black. \"Image alpha\" modifies the image where the image is transparent."
|
||||
)
|
||||
mask_mode = mask_mode_list.index(mask_mode)
|
||||
|
||||
|
||||
@ -431,26 +439,26 @@ def layout():
                             help=""
                             )
noise_mode = noise_mode_list.index(noise_mode)
find_noise_steps = st.slider("Find Noise Steps", value=st.session_state['defaults'].img2img.find_noise_steps.value,
                             min_value=st.session_state['defaults'].img2img.find_noise_steps.min_value, max_value=st.session_state['defaults'].img2img.find_noise_steps.max_value,
find_noise_steps = st.number_input("Find Noise Steps", value=st.session_state['defaults'].img2img.find_noise_steps.value,
                                   min_value=st.session_state['defaults'].img2img.find_noise_steps.min_value,
                                   step=st.session_state['defaults'].img2img.find_noise_steps.step)

with st.expander("Batch Options"):
    st.session_state["batch_count"] = st.number_input("Batch count.", value=st.session_state['defaults'].img2img.batch_count.value,
                                                      help="How many iterations or batches of images to generate in total.")

    st.session_state["batch_size"] = st.number_input("Batch size", value=st.session_state.defaults.img2img.batch_size.value,
                                                     help="How many images are at once in a batch.\
                                                     It increases the VRAM usage a lot but if you have enough VRAM it can reduce the time it takes to finish generation as more images are generated at once.\
                                                     Default: 1")

with st.expander("Preview Settings"):
    st.session_state["update_preview"] = st.session_state["defaults"].general.update_preview
    st.session_state["update_preview_frequency"] = st.number_input("Update Image Preview Frequency",
                                                                   min_value=1,
                                                                   value=st.session_state['defaults'].img2img.update_preview_frequency,
                                                                   help="Frequency in steps at which the preview image is updated. By default the frequency \
                                                                   is set to 1 step.")
#
with st.expander("Advanced"):
    with st.expander("Output Settings"):

@ -687,7 +695,7 @@ def layout():
                message.success('Render Complete: ' + info + '; Stats: ' + stats, icon="✅")

        except (StopException, KeyError):
            print(f"Received Streamlit StopException")
            logger.info(f"Received Streamlit StopException")

        # this will render all the images at the end of the generation but its better if its moved to a second tab inside col2 and shown as a gallery.
        # use the current col2 first tab to show the preview_img and update it as its generated.

@ -18,7 +18,7 @@
"""
CLIP Interrogator made by @pharmapsychotic modified to work with our WebUI.

# CLIP Interrogator by @pharmapsychotic
Twitter: https://twitter.com/pharmapsychotic
Github: https://github.com/pharmapsychotic/clip-interrogator

@ -54,6 +54,7 @@ from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode
from ldm.models.blip import blip_decoder
#import hashlib

# end of imports
# ---------------------------------------------------------------------------------------------------------------
@ -64,25 +65,30 @@ blip_image_eval_size = 512
|
||||
server_state["clip_models"] = {}
|
||||
server_state["preprocesses"] = {}
|
||||
|
||||
st.session_state["log"] = []
|
||||
|
||||
def load_blip_model():
|
||||
print("Loading BLIP Model")
|
||||
st.session_state["log_message"].code("Loading BLIP Model", language='')
|
||||
logger.info("Loading BLIP Model")
|
||||
st.session_state["log"].append("Loading BLIP Model")
|
||||
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
|
||||
|
||||
if "blip_model" not in server_state:
|
||||
with server_state_lock['blip_model']:
|
||||
server_state["blip_model"] = blip_decoder(pretrained="models/blip/model__base_caption.pth",
|
||||
image_size=blip_image_eval_size, vit='base', med_config="configs/blip/med_config.json")
|
||||
|
||||
|
||||
server_state["blip_model"] = server_state["blip_model"].eval()
|
||||
|
||||
|
||||
#if not st.session_state["defaults"].general.optimized:
|
||||
server_state["blip_model"] = server_state["blip_model"].to(device).half()
|
||||
|
||||
print("BLIP Model Loaded")
|
||||
st.session_state["log_message"].code("BLIP Model Loaded", language='')
|
||||
|
||||
logger.info("BLIP Model Loaded")
|
||||
st.session_state["log"].append("BLIP Model Loaded")
|
||||
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
|
||||
else:
|
||||
print("BLIP Model already loaded")
|
||||
st.session_state["log_message"].code("BLIP Model Already Loaded", language='')
|
||||
logger.info("BLIP Model already loaded")
|
||||
st.session_state["log"].append("BLIP Model already loaded")
|
||||
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
|
||||
|
||||
#return server_state["blip_model"]
|
||||
|
||||
@ -92,54 +98,54 @@ def artstation_links():
|
||||
"""Find and save every artstation link for the first 500 pages of the explore page."""
|
||||
# collecting links to the list()
|
||||
links = []
|
||||
|
||||
|
||||
with open('data/img2txt/artstation_links.txt', 'w') as f:
|
||||
for page_num in range(1,500):
|
||||
response = requests.get(f'https://www.artstation.com/api/v2/community/explore/projects/trending.json?page={page_num}&dimension=all&per_page=100').text
|
||||
# open json response
|
||||
data = json.loads(response)
|
||||
|
||||
|
||||
# loopinh through json response
|
||||
for result in data['data']:
|
||||
# still looping and grabbing url's
|
||||
url = result['url']
|
||||
links.append(url)
|
||||
# writing each link on the new line (\n)
|
||||
f.write(f'{url}\n')
|
||||
f.write(f'{url}\n')
|
||||
return links
|
||||
#
|
||||
def artstation_users():
|
||||
"""Get all the usernames and full name of the users on the first 500 pages of artstation explore page."""
|
||||
# collect username and full name
|
||||
artists = []
|
||||
|
||||
|
||||
# opening a .txt file
|
||||
with open('data/img2txt/artstation_artists.txt', 'w') as f:
|
||||
for page_num in range(1,500):
|
||||
response = requests.get(f'https://www.artstation.com/api/v2/community/explore/projects/trending.json?page={page_num}&dimension=all&per_page=100').text
|
||||
# open json response
|
||||
data = json.loads(response)
|
||||
|
||||
|
||||
|
||||
|
||||
# loopinh through json response
|
||||
for item in data['data']:
|
||||
#print (item['user'])
|
||||
username = item['user']['username']
|
||||
full_name = item['user']['full_name']
|
||||
|
||||
|
||||
# still looping and grabbing url's
|
||||
artists.append(username)
|
||||
artists.append(full_name)
|
||||
# writing each link on the new line (\n)
|
||||
f.write(f'{slugify(username)}\n')
|
||||
f.write(f'{slugify(full_name)}\n')
|
||||
|
||||
|
||||
return artists
|
||||
|
||||
def generate_caption(pil_image):
|
||||
|
||||
load_blip_model()
|
||||
|
||||
|
||||
gpu_image = transforms.Compose([ # type: ignore
|
||||
transforms.Resize((blip_image_eval_size, blip_image_eval_size), interpolation=InterpolationMode.BICUBIC), # type: ignore
|
||||
transforms.ToTensor(), # type: ignore
|
||||
@ -189,39 +195,42 @@ def batch_rank(model, image_features, text_array, batch_size=st.session_state["d
|
||||
|
||||
def interrogate(image, models):
|
||||
|
||||
#server_state["blip_model"] =
|
||||
#server_state["blip_model"] =
|
||||
load_blip_model()
|
||||
|
||||
print("Generating Caption")
|
||||
st.session_state["log_message"].code("Generating Caption", language='')
|
||||
|
||||
logger.info("Generating Caption")
|
||||
st.session_state["log"].append("Generating Caption")
|
||||
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
|
||||
caption = generate_caption(image)
|
||||
|
||||
if st.session_state["defaults"].general.optimized:
|
||||
del server_state["blip_model"]
|
||||
clear_cuda()
|
||||
|
||||
print("Caption Generated")
|
||||
st.session_state["log_message"].code("Caption Generated", language='')
|
||||
logger.info("Caption Generated")
|
||||
st.session_state["log"].append("Caption Generated")
|
||||
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
|
||||
|
||||
if len(models) == 0:
|
||||
print(f"\n\n{caption}")
|
||||
logger.info(f"\n\n{caption}")
|
||||
return
|
||||
|
||||
table = []
|
||||
bests = [[('', 0)]]*5
|
||||
|
||||
print("Ranking Text")
|
||||
|
||||
logger.info("Ranking Text")
|
||||
|
||||
#if "clip_model" in server_state:
|
||||
#print (server_state["clip_model"])
|
||||
|
||||
|
||||
#print (st.session_state["log_message"])
|
||||
|
||||
|
||||
for model_name in models:
|
||||
with torch.no_grad(), torch.autocast('cuda', dtype=torch.float16):
|
||||
print(f"Interrogating with {model_name}...")
|
||||
st.session_state["log_message"].code(f"Interrogating with {model_name}...", language='')
|
||||
|
||||
logger.info(f"Interrogating with {model_name}...")
|
||||
st.session_state["log"].append(f"Interrogating with {model_name}...")
|
||||
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
|
||||
|
||||
if model_name not in server_state["clip_models"]:
|
||||
if not st.session_state["defaults"].img2txt.keep_all_models_loaded:
|
||||
model_to_delete = []
|
||||
@ -233,23 +242,27 @@ def interrogate(image, models):
|
||||
del server_state["preprocesses"][model]
|
||||
clear_cuda()
|
||||
if model_name == 'ViT-H-14':
|
||||
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = open_clip.create_model_and_transforms(model_name, pretrained='laion2b_s32b_b79k', cache_dir='models/clip')
|
||||
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = open_clip.create_model_and_transforms(model_name,
|
||||
pretrained='laion2b_s32b_b79k',
|
||||
cache_dir='models/clip')
|
||||
elif model_name == 'ViT-g-14':
|
||||
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = open_clip.create_model_and_transforms(model_name, pretrained='laion2b_s12b_b42k', cache_dir='models/clip')
|
||||
server_state["clip_models"][model_name], _, server_state["preprocesses"][model_name] = open_clip.create_model_and_transforms(model_name,
|
||||
pretrained='laion2b_s12b_b42k',
|
||||
cache_dir='models/clip')
|
||||
else:
|
||||
server_state["clip_models"][model_name], server_state["preprocesses"][model_name] = clip.load(model_name, device=device, download_root='models/clip')
|
||||
server_state["clip_models"][model_name] = server_state["clip_models"][model_name].cuda().eval()
|
||||
|
||||
|
||||
images = server_state["preprocesses"][model_name](image).unsqueeze(0).cuda()
|
||||
|
||||
|
||||
|
||||
|
||||
image_features = server_state["clip_models"][model_name].encode_image(images).float()
|
||||
|
||||
|
||||
image_features /= image_features.norm(dim=-1, keepdim=True)
|
||||
|
||||
if st.session_state["defaults"].general.optimized:
|
||||
clear_cuda()
|
||||
|
||||
|
||||
ranks = []
|
||||
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["mediums"]))
|
||||
ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, ["by "+artist for artist in server_state["artists"]]))
|
||||
@ -265,6 +278,9 @@ def interrogate(image, models):
|
||||
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["themes"]))
|
||||
# ranks.append(batch_rank(server_state["clip_models"][model_name], image_features, server_state["keywords"]))
|
||||
|
||||
#print (bests)
|
||||
#print (ranks)
|
||||
|
||||
for i in range(len(ranks)):
|
||||
confidence_sum = 0
|
||||
for ci in range(len(ranks[i])):
|
||||
@ -288,6 +304,7 @@ def interrogate(image, models):
|
||||
|
||||
flaves = ', '.join([f"{x[0]}" for x in bests[4]])
|
||||
medium = bests[0][0][0]
|
||||
|
||||
if caption.startswith(medium):
|
||||
st.session_state["text_result"][st.session_state["processed_image_count"]].code(
|
||||
f"\n\n{caption} {bests[1][0][0]}, {bests[2][0][0]}, {bests[3][0][0]}, {flaves}", language="")
|
||||
@ -296,8 +313,11 @@ def interrogate(image, models):
|
||||
f"\n\n{caption}, {medium} {bests[1][0][0]}, {bests[2][0][0]}, {bests[3][0][0]}, {flaves}", language="")
|
||||
|
||||
#
|
||||
print("Finished Interrogating.")
|
||||
st.session_state["log_message"].code("Finished Interrogating.", language="")
|
||||
logger.info("Finished Interrogating.")
|
||||
st.session_state["log"].append("Finished Interrogating.")
|
||||
st.session_state["log_message"].code('\n'.join(st.session_state["log"]), language='')
|
||||
|
||||
st.session_state["log"] = []
|
||||
#
|
||||
|
||||
|
||||
@ -329,12 +349,12 @@ def img2txt():
|
||||
models.append('ViT-H-14')
|
||||
if st.session_state["ViT-g-14"]:
|
||||
models.append('ViT-g-14')
|
||||
|
||||
|
||||
if st.session_state["ViTB32"]:
|
||||
models.append('ViT-B/32')
|
||||
if st.session_state['ViTB16']:
|
||||
models.append('ViT-B/16')
|
||||
|
||||
models.append('ViT-B/16')
|
||||
|
||||
if st.session_state["ViTL14_336px"]:
|
||||
models.append('ViT-L/14@336px')
|
||||
if st.session_state["RN101"]:
|
||||
@ -346,7 +366,7 @@ def img2txt():
|
||||
if st.session_state["RN50x16"]:
|
||||
models.append('RN50x16')
|
||||
if st.session_state["RN50x64"]:
|
||||
models.append('RN50x64')
|
||||
models.append('RN50x64')
|
||||
|
||||
# if str(image_path_or_url).startswith('http://') or str(image_path_or_url).startswith('https://'):
|
||||
#image = Image.open(requests.get(image_path_or_url, stream=True).raw).convert('RGB')
|
||||
@ -389,11 +409,11 @@ def layout():
|
||||
st.session_state["ViT-H-14"] = st.checkbox("ViT-H-14", value=False, help="ViT-H-14 model.")
|
||||
st.session_state["ViT-g-14"] = st.checkbox("ViT-g-14", value=False, help="ViT-g-14 model.")
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
with st.expander("Others"):
|
||||
st.info("For DiscoDiffusion and JAX enable all the same models here as you intend to use when generating your images.")
|
||||
|
||||
st.info("For DiscoDiffusion and JAX enable all the same models here as you intend to use when generating your images.")
|
||||
|
||||
st.session_state["ViTL14_336px"] = st.checkbox("ViTL14_336px", value=False, help="ViTL14_336px model.")
|
||||
st.session_state["ViTB16"] = st.checkbox("ViTB16", value=False, help="ViTB16 model.")
|
||||
st.session_state["ViTB32"] = st.checkbox("ViTB32", value=False, help="ViTB32 model.")
|
||||
@ -401,8 +421,8 @@ def layout():
|
||||
st.session_state["RN50x4"] = st.checkbox("RN50x4", value=False, help="RN50x4 model.")
|
||||
st.session_state["RN50x16"] = st.checkbox("RN50x16", value=False, help="RN50x16 model.")
|
||||
st.session_state["RN50x64"] = st.checkbox("RN50x64", value=False, help="RN50x64 model.")
|
||||
st.session_state["RN101"] = st.checkbox("RN101", value=False, help="RN101 model.")
|
||||
|
||||
st.session_state["RN101"] = st.checkbox("RN101", value=False, help="RN101 model.")
|
||||
|
||||
#
|
||||
# st.subheader("Logs:")
|
||||
|
||||
|
@ -70,15 +70,19 @@ genfmt = "<level>{level: <10}</level> @ <green>{time:YYYY-MM-DD HH:mm:ss}</green
initfmt = "<magenta>INIT </magenta> | <level>{extra[status]: <10}</level> | <magenta>{message}</magenta>"
msgfmt = "<level>{level: <10}</level> | <level>{message}</level>"

try:
    logger.level("GENERATION", no=24, color="<cyan>")
    logger.level("PROMPT", no=23, color="<yellow>")
    logger.level("INIT", no=31, color="<white>")
    logger.level("INIT_OK", no=31, color="<green>")
    logger.level("INIT_WARN", no=31, color="<yellow>")
    logger.level("INIT_ERR", no=31, color="<red>")
    # Messages contain important information without which this application might not be able to be used
    # As such, they have the highest priority
    logger.level("MESSAGE", no=61, color="<green>")
except TypeError:
    pass

logger.__class__.generation = partialmethod(logger.__class__.log, "GENERATION")
logger.__class__.prompt = partialmethod(logger.__class__.log, "PROMPT")

@ -97,3 +101,5 @@ config = {
    ],
}
logger.configure(**config)

logger.add("logs/log_{time:MM-DD-YYYY!UTC}.log", rotation="8 MB", compression="zip", level='INFO') # Once the file is too old, it's rotated
@ -14,6 +14,7 @@
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
# base webui import and utils.
|
||||
import collections.abc
|
||||
#from webui_streamlit import st
|
||||
import gfpgan
|
||||
import hydralit as st
|
||||
@ -21,10 +22,13 @@ import hydralit as st
|
||||
|
||||
# streamlit imports
|
||||
from streamlit import StopException, StreamlitAPIException
|
||||
from streamlit.runtime.scriptrunner import script_run_context
|
||||
|
||||
#streamlit components section
|
||||
from streamlit_server_state import server_state, server_state_lock
|
||||
import hydralit_components as hc
|
||||
from hydralit import HydraHeadApp
|
||||
import streamlit_nested_layout
|
||||
|
||||
#other imports
|
||||
|
||||
@ -64,7 +68,13 @@ import piexif.helper
|
||||
from tqdm import trange
|
||||
from ldm.models.diffusion.ddim import DDIMSampler
|
||||
from ldm.util import ismap
|
||||
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Dict, Union
|
||||
from io import BytesIO
|
||||
from packaging import version
|
||||
#import librosa
|
||||
from logger import logger, set_logger_verbosity, quiesce_logger
|
||||
#from loguru import logger
|
||||
|
||||
# Temp imports
|
||||
#from basicsr.utils.registry import ARCH_REGISTRY
|
||||
@ -73,6 +83,14 @@ from ldm.util import ismap
|
||||
# end of imports
|
||||
#---------------------------------------------------------------------------------------------------------------
|
||||
|
||||
# we make a log file where we store the logs
|
||||
logger.add("logs/log_{time:MM-DD-YYYY!UTC}.log", rotation="8 MB", compression="zip", level='INFO') # Once the file is too old, it's rotated
|
||||
logger.add(sys.stderr, diagnose=True)
|
||||
logger.add(sys.stdout)
|
||||
logger.enable("")
|
||||
|
||||
#
|
||||
|
||||
try:
|
||||
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
|
||||
from transformers import logging
|
||||
@ -93,34 +111,64 @@ mimetypes.add_type('application/javascript', '.js')
|
||||
opt_C = 4
|
||||
opt_f = 8
|
||||
|
||||
if not "defaults" in st.session_state:
|
||||
st.session_state["defaults"] = {}
|
||||
|
||||
st.session_state["defaults"] = OmegaConf.load("configs/webui/webui_streamlit.yaml")
|
||||
def load_configs():
|
||||
if not "defaults" in st.session_state:
|
||||
st.session_state["defaults"] = {}
|
||||
|
||||
if (os.path.exists("configs/webui/userconfig_streamlit.yaml")):
|
||||
user_defaults = OmegaConf.load("configs/webui/userconfig_streamlit.yaml")
|
||||
try:
|
||||
st.session_state["defaults"] = OmegaConf.merge(st.session_state["defaults"], user_defaults)
|
||||
except KeyError:
|
||||
st.experimental_rerun()
|
||||
else:
|
||||
OmegaConf.save(config=st.session_state.defaults, f="configs/webui/userconfig_streamlit.yaml")
|
||||
loaded = OmegaConf.load("configs/webui/userconfig_streamlit.yaml")
|
||||
assert st.session_state.defaults == loaded
|
||||
st.session_state["defaults"] = OmegaConf.load("configs/webui/webui_streamlit.yaml")
|
||||
|
||||
if (os.path.exists(".streamlit/config.toml")):
|
||||
st.session_state["streamlit_config"] = toml.load(".streamlit/config.toml")
|
||||
if (os.path.exists("configs/webui/userconfig_streamlit.yaml")):
|
||||
user_defaults = OmegaConf.load("configs/webui/userconfig_streamlit.yaml")
|
||||
|
||||
if st.session_state["defaults"].daisi_app.running_on_daisi_io:
|
||||
if os.path.exists("scripts/modeldownload.py"):
|
||||
import modeldownload
|
||||
modeldownload.updateModels()
|
||||
if "version" in user_defaults.general:
|
||||
if version.parse(user_defaults.general.version) < version.parse(st.session_state["defaults"].general.version):
|
||||
logger.error("The version of the user config file is older than the version on the defaults config file. "
|
||||
"This means there were big changes we made on the config."
|
||||
"We are removing this file and recreating it from the defaults in order to make sure things work properly.")
|
||||
os.remove("configs/webui/userconfig_streamlit.yaml")
|
||||
st.experimental_rerun()
|
||||
else:
|
||||
logger.error("The version of the user config file is older than the version on the defaults config file. "
|
||||
"This means there were big changes we made on the config."
|
||||
"We are removing this file and recreating it from the defaults in order to make sure things work properly.")
|
||||
os.remove("configs/webui/userconfig_streamlit.yaml")
|
||||
st.experimental_rerun()
|
||||
|
||||
try:
|
||||
st.session_state["defaults"] = OmegaConf.merge(st.session_state["defaults"], user_defaults)
|
||||
except KeyError:
|
||||
st.experimental_rerun()
|
||||
else:
|
||||
OmegaConf.save(config=st.session_state.defaults, f="configs/webui/userconfig_streamlit.yaml")
|
||||
loaded = OmegaConf.load("configs/webui/userconfig_streamlit.yaml")
|
||||
assert st.session_state.defaults == loaded
|
||||
|
||||
    if (os.path.exists(".streamlit/config.toml")):
        st.session_state["streamlit_config"] = toml.load(".streamlit/config.toml")

    if st.session_state["defaults"].daisi_app.running_on_daisi_io:
        if os.path.exists("scripts/modeldownload.py"):
            import modeldownload
            modeldownload.updateModels()

    if "keep_all_models_loaded" in st.session_state.defaults.general:
        with server_state_lock["keep_all_models_loaded"]:
            server_state["keep_all_models_loaded"] = st.session_state["defaults"].general.keep_all_models_loaded
    else:
        st.session_state["defaults"].general.keep_all_models_loaded = False
        with server_state_lock["keep_all_models_loaded"]:
            server_state["keep_all_models_loaded"] = st.session_state["defaults"].general.keep_all_models_loaded

load_configs()
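
To make the merge order easier to follow, here is a condensed, standalone sketch of what load_configs() above does with the two YAML files; it deliberately omits the Streamlit session state, the experimental reruns, and the daisi/model-download branches, and the version compared is simply whatever general.version holds in each file.

import os
from omegaconf import OmegaConf
from packaging import version

defaults = OmegaConf.load("configs/webui/webui_streamlit.yaml")

if os.path.exists("configs/webui/userconfig_streamlit.yaml"):
    user_defaults = OmegaConf.load("configs/webui/userconfig_streamlit.yaml")
    # A user config older than the shipped defaults is discarded and rebuilt from scratch.
    if "version" not in user_defaults.general or \
            version.parse(user_defaults.general.version) < version.parse(defaults.general.version):
        os.remove("configs/webui/userconfig_streamlit.yaml")
    else:
        # User values override the shipped defaults key by key.
        defaults = OmegaConf.merge(defaults, user_defaults)
else:
    # First run: write the defaults out so the user has a file to edit.
    OmegaConf.save(config=defaults, f="configs/webui/userconfig_streamlit.yaml")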
#
|
||||
#app = st.HydraApp(title='Stable Diffusion WebUI', favicon="", sidebar_state="expanded",
|
||||
#hide_streamlit_markers=False, allow_url_nav=True , clear_cross_app_sessions=False)
|
||||
|
||||
#if st.session_state["defaults"].debug.enable_hydralit:
|
||||
#navbar_theme = {'txc_inactive': '#FFFFFF','menu_background':'#0e1117','txc_active':'black','option_active':'red'}
|
||||
#app = st.HydraApp(title='Stable Diffusion WebUI', favicon="", use_cookie_cache=False, sidebar_state="expanded", layout="wide", navbar_theme=navbar_theme,
|
||||
#hide_streamlit_markers=False, allow_url_nav=True , clear_cross_app_sessions=False, use_loader=False)
|
||||
#else:
|
||||
#app = None
|
||||
|
||||
# should and will be moved to a settings menu in the UI at some point
|
||||
grid_format = [s.lower() for s in st.session_state["defaults"].general.grid_format.split(':')]
|
||||
@ -165,8 +213,6 @@ os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
|
||||
os.environ["CUDA_VISIBLE_DEVICES"] = str(st.session_state["defaults"].general.gpu)
|
||||
|
||||
|
||||
#
|
||||
|
||||
# functions to load css locally OR remotely starts here. Options exist for future flexibility. Called as st.markdown with unsafe_allow_html as css injection
|
||||
# TODO, maybe look into async loading the file especially for remote fetching
|
||||
def local_css(file_name):
|
||||
@ -221,27 +267,19 @@ def human_readable_size(size, decimal_places=3):
|
||||
|
||||
|
||||
def load_models(use_LDSR = False, LDSR_model='model', use_GFPGAN=False, GFPGAN_model='GFPGANv1.4', use_RealESRGAN=False, RealESRGAN_model="RealESRGAN_x4plus",
|
||||
CustomModel_available=False, custom_model="Stable Diffusion v1.4"):
|
||||
CustomModel_available=False, custom_model="Stable Diffusion v1.5"):
|
||||
"""Load the different models. We also reuse the models that are already in memory to speed things up instead of loading them again. """
|
||||
|
||||
print ("Loading models.")
|
||||
logger.info("Loading models.")
|
||||
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text("")
|
||||
|
||||
|
||||
# Generate random run ID
|
||||
# Used to link runs linked w/ continue_prev_run which is not yet implemented
|
||||
# Use URL and filesystem safe version just in case.
|
||||
st.session_state["run_id"] = base64.urlsafe_b64encode(
|
||||
os.urandom(6)
|
||||
).decode("ascii")
|
||||
|
||||
# check what models we want to use and if the they are already loaded.
|
||||
with server_state_lock["LDSR"]:
|
||||
if use_LDSR:
|
||||
if "LDSR" in server_state and server_state["LDSR"].name == LDSR_model:
|
||||
print("LDSR already loaded")
|
||||
logger.info("LDSR already loaded")
|
||||
else:
|
||||
if "LDSR" in server_state:
|
||||
del server_state["LDSR"]
|
||||
@ -250,19 +288,21 @@ def load_models(use_LDSR = False, LDSR_model='model', use_GFPGAN=False, GFPGAN_m
|
||||
if os.path.exists(st.session_state["defaults"].general.LDSR_dir):
|
||||
try:
|
||||
server_state["LDSR"] = load_LDSR(model_name=LDSR_model)
|
||||
print(f"Loaded LDSR")
|
||||
logger.info(f"Loaded LDSR")
|
||||
except Exception:
|
||||
import traceback
|
||||
print(f"Error loading LDSR:", file=sys.stderr)
|
||||
print(traceback.format_exc(), file=sys.stderr)
|
||||
logger.error(f"Error loading LDSR:", file=sys.stderr)
|
||||
logger.error(traceback.format_exc(), file=sys.stderr)
|
||||
else:
|
||||
if "LDSR" in server_state:
|
||||
if "LDSR" in server_state and not server_state["keep_all_models_loaded"]:
|
||||
logger.debug("LDSR was in memory but we won't use it. Removing to save VRAM.")
|
||||
del server_state["LDSR"]
|
||||
|
||||
|
||||
with server_state_lock["GFPGAN"]:
|
||||
if use_GFPGAN:
|
||||
if "GFPGAN" in server_state and server_state["GFPGAN"].name == GFPGAN_model:
|
||||
print("GFPGAN already loaded")
|
||||
logger.info("GFPGAN already loaded")
|
||||
else:
|
||||
if "GFPGAN" in server_state:
|
||||
del server_state["GFPGAN"]
|
||||
@ -271,43 +311,69 @@ def load_models(use_LDSR = False, LDSR_model='model', use_GFPGAN=False, GFPGAN_m
|
||||
if os.path.exists(st.session_state["defaults"].general.GFPGAN_dir):
|
||||
try:
|
||||
server_state["GFPGAN"] = load_GFPGAN(GFPGAN_model)
|
||||
print(f"Loaded GFPGAN: {GFPGAN_model}")
|
||||
logger.info(f"Loaded GFPGAN: {GFPGAN_model}")
|
||||
except Exception:
|
||||
import traceback
|
||||
print(f"Error loading GFPGAN:", file=sys.stderr)
|
||||
print(traceback.format_exc(), file=sys.stderr)
|
||||
logger.error(f"Error loading GFPGAN:", file=sys.stderr)
|
||||
logger.error(traceback.format_exc(), file=sys.stderr)
|
||||
else:
|
||||
if "GFPGAN" in server_state:
|
||||
if "GFPGAN" in server_state and not server_state["keep_all_models_loaded"]:
|
||||
del server_state["GFPGAN"]
|
||||
|
||||
with server_state_lock["RealESRGAN"]:
|
||||
if use_RealESRGAN:
|
||||
if "RealESRGAN" in server_state and server_state["RealESRGAN"].model.name == RealESRGAN_model:
|
||||
print("RealESRGAN already loaded")
|
||||
logger.info("RealESRGAN already loaded")
|
||||
else:
|
||||
#Load RealESRGAN
|
||||
try:
|
||||
# We first remove the variable in case it has something there,
|
||||
# some errors can load the model incorrectly and leave things in memory.
|
||||
del server_state["RealESRGAN"]
|
||||
except KeyError:
|
||||
except KeyError as e:
|
||||
logger.error(e)
|
||||
pass
|
||||
|
||||
if os.path.exists(st.session_state["defaults"].general.RealESRGAN_dir):
|
||||
# st.session_state is used for keeping the models in memory across multiple pages or runs.
|
||||
server_state["RealESRGAN"] = load_RealESRGAN(RealESRGAN_model)
|
||||
print("Loaded RealESRGAN with model "+ server_state["RealESRGAN"].model.name)
|
||||
logger.info("Loaded RealESRGAN with model "+ server_state["RealESRGAN"].model.name)
|
||||
|
||||
else:
|
||||
if "RealESRGAN" in server_state:
|
||||
if "RealESRGAN" in server_state and not server_state["keep_all_models_loaded"]:
|
||||
del server_state["RealESRGAN"]
|
||||
|
||||
with server_state_lock["model"], server_state_lock["modelCS"], server_state_lock["modelFS"], server_state_lock["loaded_model"]:
|
||||
|
||||
if "model" in server_state:
|
||||
if "model" in server_state and server_state["loaded_model"] == custom_model:
|
||||
# TODO: check if the optimized mode was changed?
|
||||
print("Model already loaded")
|
||||
# if the float16 or no_half options have changed since the last time the model was loaded then we need to reload the model.
|
||||
if ("float16" in server_state and server_state['float16'] != st.session_state['defaults'].general.use_float16) \
|
||||
or ("no_half" in server_state and server_state['no_half'] != st.session_state['defaults'].general.no_half) \
|
||||
or ("optimized" in server_state and server_state['optimized'] != st.session_state['defaults'].general.optimized):
|
||||
|
||||
logger.info("Model options changed, deleting the model from memory.")
|
||||
|
||||
del server_state['float16']
|
||||
del server_state['no_half']
|
||||
|
||||
del server_state["model"]
|
||||
del server_state["modelCS"]
|
||||
del server_state["modelFS"]
|
||||
del server_state["loaded_model"]
|
||||
|
||||
del server_state['optimized']
|
||||
|
||||
server_state['float16'] = st.session_state['defaults'].general.use_float16
|
||||
server_state['no_half'] = st.session_state['defaults'].general.no_half
|
||||
server_state['optimized'] = st.session_state['defaults'].general.optimized
|
||||
|
||||
load_models(use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
|
||||
use_GFPGAN=st.session_state["use_GFPGAN"], GFPGAN_model=st.session_state["GFPGAN_model"] ,
|
||||
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
|
||||
CustomModel_available=server_state["CustomModel_available"], custom_model=st.session_state["custom_model"])
|
||||
else:
|
||||
logger.info("Model already loaded")
|
||||
|
||||
return
|
||||
else:
|
||||
@ -317,19 +383,20 @@ def load_models(use_LDSR = False, LDSR_model='model', use_GFPGAN=False, GFPGAN_m
|
||||
del server_state["modelFS"]
|
||||
del server_state["loaded_model"]
|
||||
|
||||
except KeyError:
|
||||
except KeyError as e:
|
||||
logger.error(e)
|
||||
pass
|
||||
|
||||
# if the model from txt2vid is in memory we need to remove it to improve performance.
|
||||
with server_state_lock["pipe"]:
|
||||
if "pipe" in server_state:
|
||||
if "pipe" in server_state and not server_state["keep_all_models_loaded"]:
|
||||
del server_state["pipe"]
|
||||
|
||||
if "textual_inversion" in st.session_state:
|
||||
if "textual_inversion" in st.session_state and not server_state["keep_all_models_loaded"]:
|
||||
del st.session_state['textual_inversion']
|
||||
|
||||
# At this point the model is either
|
||||
# not loaded yet or have been evicted:
|
||||
# not loaded yet or have been deleted from memory:
|
||||
# load new model into memory
|
||||
server_state["custom_model"] = custom_model
|
||||
|
||||
@ -342,12 +409,17 @@ def load_models(use_LDSR = False, LDSR_model='model', use_GFPGAN=False, GFPGAN_m
|
||||
server_state["modelFS"] = modelFS
|
||||
server_state["loaded_model"] = custom_model
|
||||
|
||||
server_state['float16'] = st.session_state['defaults'].general.use_float16
|
||||
server_state['no_half'] = st.session_state['defaults'].general.no_half
|
||||
server_state['optimized'] = st.session_state['defaults'].general.optimized
|
||||
|
||||
#trying to disable multiprocessing as it makes it so streamlit cant stop when the
|
||||
# model is loaded in memory and you need to kill the process sometimes.
|
||||
|
||||
try:
|
||||
server_state["model"].args.use_multiprocessing_for_evaluation = False
|
||||
except AttributeError:
|
||||
except AttributeError as e:
|
||||
logger.error(e)
|
||||
pass
|
||||
|
||||
if st.session_state.defaults.general.enable_attention_slicing:
|
||||
@ -356,38 +428,49 @@ def load_models(use_LDSR = False, LDSR_model='model', use_GFPGAN=False, GFPGAN_m
|
||||
if st.session_state.defaults.general.enable_minimal_memory_usage:
|
||||
server_state["model"].enable_minimal_memory_usage()
|
||||
|
||||
print("Model loaded.")
|
||||
logger.info("Model loaded.")
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def load_model_from_config(config, ckpt, verbose=False):
|
||||
|
||||
print(f"Loading model from {ckpt}")
|
||||
logger.info(f"Loading model from {ckpt}")
|
||||
|
||||
pl_sd = torch.load(ckpt, map_location="cpu")
|
||||
if "global_step" in pl_sd:
|
||||
print(f"Global Step: {pl_sd['global_step']}")
|
||||
sd = pl_sd["state_dict"]
|
||||
model = instantiate_from_config(config.model)
|
||||
m, u = model.load_state_dict(sd, strict=False)
|
||||
if len(m) > 0 and verbose:
|
||||
print("missing keys:")
|
||||
print(m)
|
||||
if len(u) > 0 and verbose:
|
||||
print("unexpected keys:")
|
||||
print(u)
|
||||
try:
|
||||
pl_sd = torch.load(ckpt, map_location="cpu")
|
||||
if "global_step" in pl_sd:
|
||||
logger.info(f"Global Step: {pl_sd['global_step']}")
|
||||
sd = pl_sd["state_dict"]
|
||||
model = instantiate_from_config(config.model)
|
||||
m, u = model.load_state_dict(sd, strict=False)
|
||||
if len(m) > 0 and verbose:
|
||||
logger.info("missing keys:")
|
||||
logger.info(m)
|
||||
if len(u) > 0 and verbose:
|
||||
logger.info("unexpected keys:")
|
||||
logger.info(u)
|
||||
|
||||
model.cuda()
|
||||
model.eval()
|
||||
|
||||
return model
|
||||
|
||||
except FileNotFoundError:
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].error(
|
||||
"You need to download the Stable Diffusion model in order to use the UI. Use the Model Manager page in order to download the model."
|
||||
)
|
||||
|
||||
raise FileNotFoundError("You need to download the Stable Diffusion model in order to use the UI. Use the Model Manager page in order to download the model.")
|
||||
|
||||
model.cuda()
|
||||
model.eval()
|
||||
return model
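
As a usage illustration only: the config and checkpoint paths below are placeholders (the real locations come from the user's settings and the Model Manager), but this is the typical way load_model_from_config() above is driven.

from omegaconf import OmegaConf

config = OmegaConf.load("configs/stable-diffusion/v1-inference.yaml")   # placeholder config path
model = load_model_from_config(config, "models/ldm/stable-diffusion-v1/model.ckpt", verbose=True)
# On success the model is on the GPU in eval mode; a missing checkpoint raises FileNotFoundError
# with a hint to use the Model Manager page.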
|
||||
|
||||
def load_sd_from_config(ckpt, verbose=False):
|
||||
print(f"Loading model from {ckpt}")
|
||||
logger.info(f"Loading model from {ckpt}")
|
||||
pl_sd = torch.load(ckpt, map_location="cpu")
|
||||
if "global_step" in pl_sd:
|
||||
print(f"Global Step: {pl_sd['global_step']}")
|
||||
logger.info(f"Global Step: {pl_sd['global_step']}")
|
||||
sd = pl_sd["state_dict"]
|
||||
return sd
|
||||
|
||||
@ -405,9 +488,9 @@ class MemUsageMonitor(threading.Thread):
|
||||
try:
|
||||
pynvml.nvmlInit()
|
||||
except:
|
||||
print(f"[{self.name}] Unable to initialize NVIDIA management. No memory stats. \n")
|
||||
logger.debug(f"[{self.name}] Unable to initialize NVIDIA management. No memory stats. \n")
|
||||
return
|
||||
print(f"[{self.name}] Recording memory usage...\n")
|
||||
logger.info(f"[{self.name}] Recording memory usage...\n")
|
||||
# Missing context
|
||||
#handle = pynvml.nvmlDeviceGetHandleByIndex(st.session_state['defaults'].general.gpu)
|
||||
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
|
||||
@ -415,9 +498,9 @@ class MemUsageMonitor(threading.Thread):
|
||||
while not self.stop_flag:
|
||||
m = pynvml.nvmlDeviceGetMemoryInfo(handle)
|
||||
self.max_usage = max(self.max_usage, m.used)
|
||||
# print(self.max_usage)
|
||||
# logger.info(self.max_usage)
|
||||
time.sleep(0.1)
|
||||
print(f"[{self.name}] Stopped recording.\n")
|
||||
logger.info(f"[{self.name}] Stopped recording.\n")
|
||||
pynvml.nvmlShutdown()
|
||||
|
||||
def read(self):
|
||||
@ -644,7 +727,7 @@ def find_noise_for_image(model, device, init_image, prompt, steps=200, cond_scal
|
||||
sigmas = dnw.get_sigmas(steps).flip(0)
|
||||
|
||||
if verbose:
|
||||
print(sigmas)
|
||||
logger.info(sigmas)
|
||||
|
||||
for i in trange(1, len(sigmas)):
|
||||
x_in = torch.cat([x] * 2)
|
||||
@ -940,6 +1023,7 @@ class LDSR():
|
||||
log["sample_noquant"] = x_sample_noquant
|
||||
log["sample_diff"] = torch.abs(x_sample_noquant - x_sample)
|
||||
except:
|
||||
logger.error("Error with LDSR")
|
||||
pass
|
||||
|
||||
log["sample"] = x_sample
|
||||
@ -955,7 +1039,7 @@ class LDSR():
|
||||
ddim = DDIMSampler(model)
|
||||
bs = shape[0] # dont know where this comes from but wayne
|
||||
shape = shape[1:] # cut batch dim
|
||||
print(f"Sampling with eta = {eta}; steps: {steps}")
|
||||
logger.info(f"Sampling with eta = {eta}; steps: {steps}")
|
||||
samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, conditioning=cond, callback=callback,
|
||||
normals_sequence=normals_sequence, quantize_x0=quantize_x0, eta=eta,
|
||||
mask=mask, x0=x0, temperature=temperature, verbose=False,
|
||||
@ -1099,7 +1183,7 @@ class LDSR():
|
||||
width_downsampled_pre = width_og//downsample_rate
|
||||
height_downsampled_pre = height_og//downsample_rate
|
||||
if downsample_rate != 1:
|
||||
print(f'Downsampling from [{width_og}, {height_og}] to [{width_downsampled_pre}, {height_downsampled_pre}]')
|
||||
logger.info(f'Downsampling from [{width_og}, {height_og}] to [{width_downsampled_pre}, {height_downsampled_pre}]')
|
||||
im_og = im_og.resize((width_downsampled_pre, height_downsampled_pre), Image.LANCZOS)
|
||||
|
||||
logs = self.run(model["model"], im_og, diffMode, diffusion_steps, eta)
|
||||
@ -1126,17 +1210,17 @@ class LDSR():
|
||||
aliasing = Image.NEAREST
|
||||
|
||||
if downsample_rate != 1:
|
||||
print(f'Downsampling from [{width}, {height}] to [{width_downsampled_post}, {height_downsampled_post}]')
|
||||
logger.info(f'Downsampling from [{width}, {height}] to [{width_downsampled_post}, {height_downsampled_post}]')
|
||||
a = a.resize((width_downsampled_post, height_downsampled_post), aliasing)
|
||||
elif post_downsample == 'Original Size':
|
||||
print(f'Downsampling from [{width}, {height}] to Original Size [{width_og}, {height_og}]')
|
||||
logger.info(f'Downsampling from [{width}, {height}] to Original Size [{width_og}, {height_og}]')
|
||||
a = a.resize((width_og, height_og), aliasing)
|
||||
|
||||
del model
|
||||
gc.collect()
|
||||
torch.cuda.empty_cache()
|
||||
|
||||
print(f'Processing finished!')
|
||||
logger.info(f'Processing finished!')
|
||||
return a
|
||||
|
||||
|
||||
@ -1385,7 +1469,7 @@ def ModelLoader(models,load=False,unload=False,imgproc_realesrgan_model_name='Re
|
||||
del global_vars[m+'CS']
|
||||
if m == 'model':
|
||||
m = 'Stable Diffusion'
|
||||
print('Unloaded ' + m)
|
||||
logger.info('Unloaded ' + m)
|
||||
if load:
|
||||
for m in models:
|
||||
if m not in global_vars or m in global_vars and type(global_vars[m]) == bool:
|
||||
@ -1404,7 +1488,7 @@ def ModelLoader(models,load=False,unload=False,imgproc_realesrgan_model_name='Re
|
||||
global_vars[m] = load_LDSR()
|
||||
if m =='model':
|
||||
m='Stable Diffusion'
|
||||
print('Loaded ' + m)
|
||||
logger.info('Loaded ' + m)
|
||||
torch_gc()
|
||||
|
||||
|
||||
@ -1417,7 +1501,8 @@ def generation_callback(img, i=0):
|
||||
try:
|
||||
if i == 0:
|
||||
if img['i']: i = img['i']
|
||||
except TypeError:
|
||||
except TypeError as e:
|
||||
logger.error(e)
|
||||
pass
|
||||
|
||||
if st.session_state.update_preview and\
|
||||
@ -1448,28 +1533,40 @@ def generation_callback(img, i=0):
|
||||
|
||||
|
||||
# update image on the UI so we can see the progress
|
||||
st.session_state["preview_image"].image(pil_image)
|
||||
if "preview_image" in st.session_state:
|
||||
st.session_state["preview_image"].image(pil_image)
|
||||
|
||||
# Show a progress bar so we can keep track of the progress even when the image progress is not been shown,
|
||||
# Dont worry, it doesnt affect the performance.
|
||||
if st.session_state["generation_mode"] == "txt2img":
|
||||
percent = int(100 * float(i+1 if i+1 < st.session_state.sampling_steps else st.session_state.sampling_steps)/float(st.session_state.sampling_steps))
|
||||
st.session_state["progress_bar_text"].text(
|
||||
f"Running step: {i+1 if i+1 < st.session_state.sampling_steps else st.session_state.sampling_steps}/{st.session_state.sampling_steps} {percent if percent < 100 else 100}%")
|
||||
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text(
|
||||
f"Running step: {i+1 if i+1 < st.session_state.sampling_steps else st.session_state.sampling_steps}/{st.session_state.sampling_steps} {percent if percent < 100 else 100}%")
|
||||
else:
|
||||
if st.session_state["generation_mode"] == "img2img":
|
||||
round_sampling_steps = round(st.session_state.sampling_steps * st.session_state["denoising_strength"])
|
||||
percent = int(100 * float(i+1 if i+1 < round_sampling_steps else round_sampling_steps)/float(round_sampling_steps))
|
||||
st.session_state["progress_bar_text"].text(
|
||||
f"""Running step: {i+1 if i+1 < round_sampling_steps else round_sampling_steps}/{round_sampling_steps} {percent if percent < 100 else 100}%""")
|
||||
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text(
|
||||
f"""Running step: {i+1 if i+1 < round_sampling_steps else round_sampling_steps}/{round_sampling_steps} {percent if percent < 100 else 100}%""")
|
||||
else:
|
||||
if st.session_state["generation_mode"] == "txt2vid":
|
||||
percent = int(100 * float(i+1 if i+1 < st.session_state.sampling_steps else st.session_state.sampling_steps)/float(st.session_state.sampling_steps))
|
||||
st.session_state["progress_bar_text"].text(
|
||||
f"Running step: {i+1 if i+1 < st.session_state.sampling_steps else st.session_state.sampling_steps}/{st.session_state.sampling_steps}"
|
||||
f"{percent if percent < 100 else 100}%")
|
||||
|
||||
st.session_state["progress_bar"].progress(percent if percent < 100 else 100)
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text(
|
||||
f"Running step: {i+1 if i+1 < st.session_state.sampling_steps else st.session_state.sampling_steps}/{st.session_state.sampling_steps}"
|
||||
f"{percent if percent < 100 else 100}%")
|
||||
|
||||
if "progress_bar" in st.session_state:
|
||||
try:
|
||||
st.session_state["progress_bar"].progress(percent if percent < 100 else 100)
|
||||
except UnboundLocalError as e:
|
||||
#logger.error(e)
|
||||
pass
|
||||
|
||||
|
||||
prompt_parser = re.compile("""
|
||||
@ -1614,15 +1711,20 @@ def image_grid(imgs, batch_size, force_n_rows=None, captions=None):
|
||||
    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols * w, rows * h), color='black')

    try:
        fnt = get_font(30)
    except Exception:
        pass

    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
        try:
            if captions and i<len(captions):
                d = ImageDraw.Draw( grid )
                size = d.textbbox( (0,0), captions[i], font=fnt, stroke_width=2, align="center" )
                d.multiline_text((i % cols * w + w/2, i // cols * h + h - size[3]), captions[i], font=fnt, fill=(255,255,255), stroke_width=2, stroke_fill=(0,0,0), anchor="mm", align="center")
        except Exception:
            pass
    return grid
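
A small usage sketch of image_grid() above, with three solid-color placeholder images standing in for generated samples; the row/column math lives in the part of the function outside this hunk, so the exact layout is assumed rather than shown.

from PIL import Image

imgs = [Image.new("RGB", (512, 512), color=c) for c in ("red", "green", "blue")]
grid = image_grid(imgs, batch_size=3, captions=["red", "green", "blue"])
grid.save("outputs/example_grid.png")  # placeholder output path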
|
||||
def seed_to_int(s):
|
||||
@ -1755,7 +1857,7 @@ def custom_models_available():
|
||||
with server_state_lock["CustomModel_available"]:
|
||||
if len(server_state["custom_models"]) > 0:
|
||||
server_state["CustomModel_available"] = True
|
||||
server_state["custom_models"].append("Stable Diffusion v1.4")
|
||||
server_state["custom_models"].append("Stable Diffusion v1.5")
|
||||
else:
|
||||
server_state["CustomModel_available"] = False
|
||||
|
||||
@ -1870,7 +1972,7 @@ def save_sample(image, sample_path_i, filename, jpg_sample, prompts, seeds, widt
|
||||
target="txt2img" if init_img is None else "img2img",
|
||||
prompt=prompts[i], ddim_steps=steps, toggles=toggles, sampler_name=sampler_name,
|
||||
ddim_eta=ddim_eta, n_iter=n_iter, batch_size=batch_size, cfg_scale=cfg_scale,
|
||||
seed=seeds[i], width=width, height=height, normalize_prompt_weights=normalize_prompt_weights, model_name=server_state["loaded_model"])
|
||||
seed=seeds[i], width=width, height=height, normalize_prompt_weights=normalize_prompt_weights, model_name=model_name)
|
||||
# Not yet any use for these, but they bloat up the files:
|
||||
# info_dict["init_img"] = init_img
|
||||
# info_dict["init_mask"] = init_mask
|
||||
@ -2107,7 +2209,7 @@ def process_images(
|
||||
n_iter = math.ceil(len(all_prompts) / batch_size)
|
||||
all_seeds = len(all_prompts) * [seed]
|
||||
|
||||
print(f"Prompt matrix will create {len(all_prompts)} images using a total of {n_iter} batches.")
|
||||
logger.info(f"Prompt matrix will create {len(all_prompts)} images using a total of {n_iter} batches.")
|
||||
else:
|
||||
|
||||
if not st.session_state['defaults'].general.no_verify_input:
|
||||
@ -2115,8 +2217,8 @@ def process_images(
|
||||
check_prompt_length(prompt, comments)
|
||||
except:
|
||||
import traceback
|
||||
print("Error verifying input:", file=sys.stderr)
|
||||
print(traceback.format_exc(), file=sys.stderr)
|
||||
logger.info("Error verifying input:", file=sys.stderr)
|
||||
logger.info(traceback.format_exc(), file=sys.stderr)
|
||||
|
||||
all_prompts = batch_size * n_iter * [prompt]
|
||||
all_seeds = [seed + x for x in range(len(all_prompts))]
|
||||
@ -2143,12 +2245,12 @@ def process_images(
|
||||
all_seeds[si] += target_seed_randomizer
|
||||
|
||||
for n in range(n_iter):
|
||||
print(f"Iteration: {n+1}/{n_iter}")
|
||||
logger.info(f"Iteration: {n+1}/{n_iter}")
|
||||
prompts = all_prompts[n * batch_size:(n + 1) * batch_size]
|
||||
captions = prompt_matrix_parts[n * batch_size:(n + 1) * batch_size]
|
||||
seeds = all_seeds[n * batch_size:(n + 1) * batch_size]
|
||||
|
||||
print(prompt)
|
||||
logger.info(prompt)
|
||||
|
||||
if st.session_state['defaults'].general.optimized:
|
||||
server_state["modelCS"].to(st.session_state['defaults'].general.gpu)
|
||||
@ -2216,7 +2318,9 @@ def process_images(
|
||||
sanitized_prompt = slugify(prompts[i])
|
||||
|
||||
percent = i / len(x_samples_ddim)
|
||||
st.session_state["progress_bar"].progress(percent if percent < 100 else 100)
|
||||
|
||||
if "progress_bar" in st.session_state:
|
||||
st.session_state["progress_bar"].progress(percent if percent < 100 else 100)
|
||||
|
||||
if sort_samples:
|
||||
full_path = os.path.join(os.getcwd(), sample_path, sanitized_prompt)
|
||||
@ -2243,17 +2347,21 @@ def process_images(
|
||||
original_sample = x_sample
|
||||
original_filename = filename
|
||||
|
||||
st.session_state["preview_image"].image(image)
|
||||
if "preview_image" in st.session_state:
|
||||
st.session_state["preview_image"].image(image)
|
||||
|
||||
#
|
||||
if use_GFPGAN and server_state["GFPGAN"] is not None and not use_RealESRGAN and not use_LDSR:
|
||||
st.session_state["progress_bar_text"].text("Running GFPGAN on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text("Running GFPGAN on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
|
||||
if server_state["GFPGAN"].name != GFPGAN_model:
|
||||
load_models(use_LDSR=use_LDSR, LDSR_model=LDSR_model_name, use_GFPGAN=use_GFPGAN, use_RealESRGAN=use_RealESRGAN, RealESRGAN_model=realesrgan_model_name)
|
||||
|
||||
torch_gc()
|
||||
cropped_faces, restored_faces, restored_img = server_state["GFPGAN"].enhance(x_sample[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
|
||||
|
||||
with torch.autocast('cuda'):
|
||||
cropped_faces, restored_faces, restored_img = server_state["GFPGAN"].enhance(x_sample[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
|
||||
|
||||
gfpgan_sample = restored_img[:,:,::-1]
|
||||
gfpgan_image = Image.fromarray(gfpgan_sample)
|
||||
@ -2276,7 +2384,8 @@ def process_images(
|
||||
|
||||
#
|
||||
elif use_RealESRGAN and server_state["RealESRGAN"] is not None and not use_GFPGAN:
|
||||
st.session_state["progress_bar_text"].text("Running RealESRGAN on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text("Running RealESRGAN on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
#skip_save = True # #287 >_>
|
||||
torch_gc()
|
||||
|
||||
@ -2305,8 +2414,9 @@ def process_images(
|
||||
|
||||
#
|
||||
elif use_LDSR and "LDSR" in server_state and not use_GFPGAN:
|
||||
print ("Running LDSR on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
st.session_state["progress_bar_text"].text("Running LDSR on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
logger.info ("Running LDSR on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text("Running LDSR on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
#skip_save = True # #287 >_>
|
||||
torch_gc()
|
||||
|
||||
@ -2338,8 +2448,9 @@ def process_images(
|
||||
|
||||
#
|
||||
elif use_LDSR and "LDSR" in server_state and use_GFPGAN and "GFPGAN" in server_state:
|
||||
print ("Running GFPGAN+LDSR on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
st.session_state["progress_bar_text"].text("Running GFPGAN+LDSR on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
logger.info ("Running GFPGAN+LDSR on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text("Running GFPGAN+LDSR on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
|
||||
if server_state["GFPGAN"].name != GFPGAN_model:
|
||||
load_models(use_LDSR=use_LDSR, LDSR_model=LDSR_model_name, use_GFPGAN=use_GFPGAN, use_RealESRGAN=use_RealESRGAN, RealESRGAN_model=realesrgan_model_name)
|
||||
@ -2378,7 +2489,8 @@ def process_images(
|
||||
grid_captions.append( captions[i] + "\ngfpgan-ldsr" )
|
||||
|
||||
elif use_RealESRGAN and server_state["RealESRGAN"] is not None and use_GFPGAN and server_state["GFPGAN"] is not None:
|
||||
st.session_state["progress_bar_text"].text("Running GFPGAN+RealESRGAN on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text("Running GFPGAN+RealESRGAN on image %d of %d..." % (i+1, len(x_samples_ddim)))
|
||||
#skip_save = True # #287 >_>
|
||||
torch_gc()
|
||||
cropped_faces, restored_faces, restored_img = server_state["GFPGAN"].enhance(x_sample[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
|
||||
@ -2455,8 +2567,11 @@ def process_images(
|
||||
# Constrain the final preview image to 1440x900 so we're not sending huge amounts of data
|
||||
# to the browser
|
||||
preview_image = constrain_image(preview_image, 1440, 900)
|
||||
st.session_state["progress_bar_text"].text("Finished!")
|
||||
st.session_state["preview_image"].image(preview_image)
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text("Finished!")
|
||||
|
||||
if "preview_image" in st.session_state:
|
||||
st.session_state["preview_image"].image(preview_image)
|
||||
|
||||
if prompt_matrix or save_grid:
|
||||
if prompt_matrix:
|
||||
@ -2468,8 +2583,8 @@ def process_images(
|
||||
grid = draw_prompt_matrix(grid, width, height, prompt_matrix_parts)
|
||||
except:
|
||||
import traceback
|
||||
print("Error creating prompt_matrix text:", file=sys.stderr)
|
||||
print(traceback.format_exc(), file=sys.stderr)
|
||||
logger.error("Error creating prompt_matrix text:", file=sys.stderr)
|
||||
logger.error(traceback.format_exc(), file=sys.stderr)
|
||||
else:
|
||||
grid = image_grid(output_images, batch_size)
|
||||
|
||||
@ -2554,4 +2669,151 @@ def convert_pt_to_bin_and_load(input_file, text_encoder, tokenizer, placeholder_
|
||||
}
|
||||
torch.save(params_dict, "learned_embeds.bin")
|
||||
load_learned_embed_in_clip("learned_embeds.bin", text_encoder, tokenizer, placeholder_token)
|
||||
print("loaded", placeholder_token)
|
||||
logger.info("loaded", placeholder_token)
|
||||
|
||||
@logger.catch(reraise=True)
|
||||
def run_bridge(interval, api_key, horde_name, horde_url, priority_usernames, horde_max_pixels, horde_nsfw, horde_censor_nsfw, horde_blacklist, horde_censorlist):
|
||||
current_id = None
|
||||
current_payload = None
|
||||
loop_retry = 0
|
||||
# load the model for stable horde if its not in memory already
|
||||
# we should load it after we get the request from the API in
|
||||
# case the model is different from the loaded in memory but
|
||||
# for now we can load it here so its read right away.
|
||||
load_models(use_GFPGAN=True)
|
||||
while True:
|
||||
|
||||
if loop_retry > 10 and current_id:
|
||||
logger.info(f"Exceeded retry count {loop_retry} for generation id {current_id}. Aborting generation!")
|
||||
current_id = None
|
||||
current_payload = None
|
||||
current_generation = None
|
||||
loop_retry = 0
|
||||
elif current_id:
|
||||
logger.info(f"Retrying ({loop_retry}/10) for generation id {current_id}...")
|
||||
gen_dict = {
|
||||
"name": horde_name,
|
||||
"max_pixels": horde_max_pixels,
|
||||
"priority_usernames": priority_usernames,
|
||||
"nsfw": horde_nsfw,
|
||||
"blacklist": horde_blacklist,
|
||||
"models": ["stable_diffusion"],
|
||||
}
|
||||
headers = {"apikey": api_key}
|
||||
if current_id:
|
||||
loop_retry += 1
|
||||
else:
|
||||
try:
|
||||
pop_req = requests.post(horde_url + '/api/v2/generate/pop', json = gen_dict, headers = headers)
|
||||
except requests.exceptions.ConnectionError:
|
||||
logger.warning(f"Server {horde_url} unavailable during pop. Waiting 10 seconds...")
|
||||
time.sleep(10)
|
||||
continue
|
||||
except requests.exceptions.JSONDecodeError():
|
||||
logger.warning(f"Server {horde_url} unavailable during pop. Waiting 10 seconds...")
|
||||
time.sleep(10)
|
||||
continue
|
||||
try:
|
||||
pop = pop_req.json()
|
||||
except json.decoder.JSONDecodeError:
|
||||
logger.warning(f"Could not decode response from {horde_url} as json. Please inform its administrator!")
|
||||
time.sleep(interval)
|
||||
continue
|
||||
if pop == None:
|
||||
logger.warning(f"Something has gone wrong with {horde_url}. Please inform its administrator!")
|
||||
time.sleep(interval)
|
||||
continue
|
||||
if not pop_req.ok:
|
||||
message = pop['message']
|
||||
logger.warning(f"During gen pop, server {horde_url} responded with status code {pop_req.status_code}: {pop['message']}. Waiting for 10 seconds...")
|
||||
if 'errors' in pop:
|
||||
logger.debug(f"Detailed Request Errors: {pop['errors']}")
|
||||
time.sleep(10)
|
||||
continue
|
||||
if not pop.get("id"):
|
||||
skipped_info = pop.get('skipped')
|
||||
if skipped_info and len(skipped_info):
|
||||
skipped_info = f" Skipped Info: {skipped_info}."
|
||||
else:
|
||||
skipped_info = ''
|
||||
logger.info(f"Server {horde_url} has no valid generations to do for us.{skipped_info}")
|
||||
time.sleep(interval)
|
||||
continue
|
||||
current_id = pop['id']
|
||||
logger.info(f"Request with id {current_id} picked up. Initiating work...")
|
||||
current_payload = pop['payload']
|
||||
if 'toggles' in current_payload and current_payload['toggles'] == None:
|
||||
logger.error(f"Received Bad payload: {pop}")
|
||||
current_id = None
|
||||
current_payload = None
|
||||
current_generation = None
|
||||
loop_retry = 0
|
||||
time.sleep(10)
|
||||
continue
|
||||
|
||||
logger.debug(current_payload)
|
||||
current_payload['toggles'] = current_payload.get('toggles', [1,4])
|
||||
# In bridge-mode, matrix is prepared on the horde and split in multiple nodes
|
||||
if 0 in current_payload['toggles']:
|
||||
current_payload['toggles'].remove(0)
|
||||
if 8 not in current_payload['toggles']:
|
||||
if horde_censor_nsfw and not horde_nsfw:
|
||||
current_payload['toggles'].append(8)
|
||||
elif any(word in current_payload['prompt'] for word in horde_censorlist):
|
||||
current_payload['toggles'].append(8)
|
||||
|
||||
from txt2img import txt2img
|
||||
|
||||
|
||||
"""{'prompt': 'Centred Husky, inside spiral with circular patterns, trending on dribbble, knotwork, spirals, key patterns,
|
||||
zoomorphics, ', 'ddim_steps': 30, 'n_iter': 1, 'sampler_name': 'DDIM', 'cfg_scale': 16.0, 'seed': '3405278433', 'height': 512, 'width': 512}"""
|
||||
|
||||
#images, seed, info, stats = txt2img(**current_payload)
|
||||
images, seed, info, stats = txt2img(str(current_payload['prompt']), int(current_payload['ddim_steps']), str(current_payload['sampler_name']),
|
||||
int(current_payload['n_iter']), 1, float(current_payload["cfg_scale"]), str(current_payload["seed"]),
|
||||
int(current_payload["height"]), int(current_payload["width"]), save_grid=False, group_by_prompt=False,
|
||||
save_individual_images=False,write_info_files=False)
|
||||
|
||||
buffer = BytesIO()
|
||||
# We send as WebP to avoid using all the horde bandwidth
|
||||
images[0].save(buffer, format="WebP", quality=90)
|
||||
# logger.info(info)
|
||||
submit_dict = {
|
||||
"id": current_id,
|
||||
"generation": base64.b64encode(buffer.getvalue()).decode("utf8"),
|
||||
"api_key": api_key,
|
||||
"seed": seed,
|
||||
"max_pixels": horde_max_pixels,
|
||||
}
|
||||
current_generation = seed
|
||||
while current_id and current_generation != None:
|
||||
try:
|
||||
submit_req = requests.post(horde_url + '/api/v2/generate/submit', json = submit_dict, headers = headers)
|
||||
try:
|
||||
submit = submit_req.json()
|
||||
except json.decoder.JSONDecodeError:
|
||||
logger.error(f"Something has gone wrong with {horde_url} during submit. Please inform its administrator! (Retry {loop_retry}/10)")
|
||||
time.sleep(interval)
|
||||
continue
|
||||
if submit_req.status_code == 404:
|
||||
logger.info(f"The generation we were working on got stale. Aborting!")
|
||||
elif not submit_req.ok:
|
||||
logger.error(f"During gen submit, server {horde_url} responded with status code {submit_req.status_code}: {submit['message']}. Waiting for 10 seconds... (Retry {loop_retry}/10)")
|
||||
if 'errors' in submit:
|
||||
logger.debug(f"Detailed Request Errors: {submit['errors']}")
|
||||
time.sleep(10)
|
||||
continue
|
||||
else:
|
||||
logger.info(f'Submitted generation with id {current_id} and contributed for {submit_req.json()["reward"]}')
|
||||
current_id = None
|
||||
current_payload = None
|
||||
current_generation = None
|
||||
loop_retry = 0
|
||||
except requests.exceptions.ConnectionError:
|
||||
logger.warning(f"Server {horde_url} unavailable during submit. Waiting 10 seconds... (Retry {loop_retry}/10)")
|
||||
time.sleep(10)
|
||||
continue
|
||||
time.sleep(interval)
|
||||
|
||||
|
||||
#
|
||||
|
@ -1,26 +1,166 @@
|
||||
import gc
|
||||
import inspect
|
||||
import warnings
|
||||
from typing import List, Optional, Union
|
||||
from typing import Callable, List, Optional, Union
|
||||
from pathlib import Path
|
||||
from torchvision.transforms.functional import pil_to_tensor
|
||||
import librosa
|
||||
from PIL import Image
|
||||
from torchvision.io import write_video
|
||||
import numpy as np
|
||||
import time
|
||||
import json
|
||||
|
||||
import torch
|
||||
|
||||
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
|
||||
|
||||
from diffusers import ModelMixin
|
||||
from diffusers.configuration_utils import FrozenDict
|
||||
from diffusers.models import AutoencoderKL, UNet2DConditionModel
|
||||
from diffusers.pipeline_utils import DiffusionPipeline
|
||||
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
|
||||
from diffusers.utils import deprecate, logging
|
||||
from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
|
||||
from diffusers import StableDiffusionPipelineOutput
|
||||
#from diffusers.safety_checker import StableDiffusionSafetyChecker
|
||||
from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
|
||||
|
||||
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
|
||||
from torch import nn
|
||||
|
||||
from .upsampling import RealESRGANModel
|
||||
|
||||
|
||||
class StableDiffusionPipeline(DiffusionPipeline):
|
||||
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
|
||||
|
||||
|
||||
def get_spec_norm(wav, sr, n_mels=512, hop_length=704):
|
||||
"""Obtain maximum value for each time-frame in Mel Spectrogram,
|
||||
and normalize between 0 and 1
|
||||
|
||||
Borrowed from the lucid sonic dreams repo. In there, they programmatically determine hop length,
|
||||
but I really didn't understand what was going on, so I removed it and hard-coded the output.
|
||||
"""
|
||||
|
||||
# Generate Mel Spectrogram
|
||||
spec_raw = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=n_mels, hop_length=hop_length)
|
||||
|
||||
# Obtain maximum value per time-frame
|
||||
spec_max = np.amax(spec_raw, axis=0)
|
||||
|
||||
# Normalize all values between 0 and 1
|
||||
spec_norm = (spec_max - np.min(spec_max)) / np.ptp(spec_max)
|
||||
|
||||
return spec_norm
|
||||
|
||||
|
||||
def get_timesteps_arr(audio_filepath, offset, duration, fps=30, margin=(1.0, 5.0)):
|
||||
"""Get the array that will be used to determine how much to interpolate between images.
|
||||
|
||||
Normally, this is just a linspace between 0 and 1 for the number of frames to generate. In this case,
|
||||
we want to use the amplitude of the audio to determine how much to interpolate between images.
|
||||
|
||||
So, here we:
|
||||
1. Load the audio file
|
||||
2. Split the audio into harmonic and percussive components
|
||||
3. Get the normalized amplitude of the percussive component, resized to the number of frames
|
||||
4. Get the cumulative sum of the amplitude array
|
||||
5. Normalize the cumulative sum between 0 and 1
|
||||
6. Return the array
|
||||
|
||||
I honestly have no clue what I'm doing here. Suggestions welcome.
|
||||
"""
|
||||
y, sr = librosa.load(audio_filepath, offset=offset, duration=duration)
|
||||
wav_harmonic, wav_percussive = librosa.effects.hpss(y, margin=margin)
|
||||
|
||||
# Apparently n_mels is supposed to be input shape but I don't think it matters here?
|
||||
frame_duration = int(sr / fps)
|
||||
wav_norm = get_spec_norm(wav_percussive, sr, n_mels=512, hop_length=frame_duration)
|
||||
amplitude_arr = np.resize(wav_norm, int(duration * fps))
|
||||
T = np.cumsum(amplitude_arr)
|
||||
T /= T[-1]
|
||||
T[0] = 0.0
|
||||
return T
|
||||
|
||||
|
||||
def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
|
||||
"""helper function to spherically interpolate two arrays v1 v2"""
|
||||
|
||||
# Track whether the inputs were torch tensors so they can be converted back before returning
inputs_are_torch = False
if not isinstance(v0, np.ndarray):
|
||||
inputs_are_torch = True
|
||||
input_device = v0.device
|
||||
v0 = v0.cpu().numpy()
|
||||
v1 = v1.cpu().numpy()
|
||||
|
||||
dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
|
||||
if np.abs(dot) > DOT_THRESHOLD:
|
||||
v2 = (1 - t) * v0 + t * v1
|
||||
else:
|
||||
theta_0 = np.arccos(dot)
|
||||
sin_theta_0 = np.sin(theta_0)
|
||||
theta_t = theta_0 * t
|
||||
sin_theta_t = np.sin(theta_t)
|
||||
s0 = np.sin(theta_0 - theta_t) / sin_theta_0
|
||||
s1 = sin_theta_t / sin_theta_0
|
||||
v2 = s0 * v0 + s1 * v1
|
||||
|
||||
if inputs_are_torch:
|
||||
v2 = torch.from_numpy(v2).to(input_device)
|
||||
|
||||
return v2
|
||||
|
||||
|
||||
def make_video_pyav(
|
||||
frames_or_frame_dir: Union[str, Path, torch.Tensor],
|
||||
audio_filepath: Union[str, Path] = None,
|
||||
fps: int = 30,
|
||||
audio_offset: int = 0,
|
||||
audio_duration: int = 2,
|
||||
sr: int = 22050,
|
||||
output_filepath: Union[str, Path] = "output.mp4",
|
||||
glob_pattern: str = "*.png",
|
||||
):
|
||||
"""
|
||||
TODO - docstring here
|
||||
|
||||
frames_or_frame_dir: (Union[str, Path, torch.Tensor]):
|
||||
Either a directory of images, or a tensor of shape (T, C, H, W) in range [0, 255].
|
||||
"""
|
||||
|
||||
# Torchvision write_video doesn't support pathlib paths
|
||||
output_filepath = str(output_filepath)
|
||||
|
||||
if isinstance(frames_or_frame_dir, (str, Path)):
|
||||
frames = None
|
||||
for img in sorted(Path(frames_or_frame_dir).glob(glob_pattern)):
|
||||
frame = pil_to_tensor(Image.open(img)).unsqueeze(0)
|
||||
frames = frame if frames is None else torch.cat([frames, frame])
|
||||
else:
|
||||
|
||||
frames = frames_or_frame_dir
|
||||
|
||||
# TCHW -> THWC
|
||||
frames = frames.permute(0, 2, 3, 1)
|
||||
|
||||
if audio_filepath:
|
||||
# Read audio, convert to tensor
|
||||
audio, sr = librosa.load(audio_filepath, sr=sr, mono=True, offset=audio_offset, duration=audio_duration)
|
||||
audio_tensor = torch.tensor(audio).unsqueeze(0)
|
||||
|
||||
write_video(
|
||||
output_filepath,
|
||||
frames,
|
||||
fps=fps,
|
||||
audio_array=audio_tensor,
|
||||
audio_fps=sr,
|
||||
audio_codec="aac",
|
||||
options={"crf": "10", "pix_fmt": "yuv420p"},
|
||||
)
|
||||
else:
|
||||
write_video(output_filepath, frames, fps=fps, options={"crf": "10", "pix_fmt": "yuv420p"})
|
||||
|
||||
return output_filepath
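# Hedged usage sketch (the file paths below are placeholders, not part of the original code):
# make_video_pyav("outputs/frames", audio_filepath="music.wav", fps=30, audio_offset=0,
#                 audio_duration=2, output_filepath="outputs/clip.mp4", glob_pattern="*.png")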
|
||||
|
||||
|
||||
class StableDiffusionWalkPipeline(DiffusionPipeline):
|
||||
r"""
|
||||
Pipeline for text-to-image generation using Stable Diffusion.
|
||||
|
||||
Pipeline for generating videos by interpolating Stable Diffusion's latent space.
|
||||
This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
|
||||
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
|
||||
|
||||
Args:
|
||||
vae ([`AutoencoderKL`]):
|
||||
Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
|
||||
@ -35,6 +175,11 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
scheduler ([`SchedulerMixin`]):
|
||||
A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
|
||||
[`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
|
||||
safety_checker ([`StableDiffusionSafetyChecker`]):
|
||||
Classification module that estimates whether generated images could be considered offensive or harmful.
|
||||
Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
|
||||
feature_extractor ([`CLIPFeatureExtractor`]):
|
||||
Model that extracts features from generated images to be used as inputs for the `safety_checker`.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
@ -43,10 +188,26 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
text_encoder: CLIPTextModel,
|
||||
tokenizer: CLIPTokenizer,
|
||||
unet: UNet2DConditionModel,
|
||||
scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler]
|
||||
scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
|
||||
safety_checker: StableDiffusionSafetyChecker,
|
||||
feature_extractor: CLIPFeatureExtractor,
|
||||
):
|
||||
super().__init__()
|
||||
scheduler = scheduler.set_format("pt")
|
||||
|
||||
if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
|
||||
deprecation_message = (
|
||||
f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
|
||||
f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
|
||||
"to update the config accordingly as leaving `steps_offset` might led to incorrect results"
|
||||
" in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
|
||||
" it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
|
||||
" file"
|
||||
)
|
||||
deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
|
||||
new_config = dict(scheduler.config)
|
||||
new_config["steps_offset"] = 1
|
||||
scheduler._internal_dict = FrozenDict(new_config)
|
||||
|
||||
self.register_modules(
|
||||
vae=vae,
|
||||
text_encoder=text_encoder,
|
||||
@ -60,10 +221,8 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
|
||||
r"""
|
||||
Enable sliced attention computation.
|
||||
|
||||
When this option is enabled, the attention module will split the input tensor in slices, to compute attention
|
||||
in several steps. This is useful to save some memory in exchange for a small speed decrease.
|
||||
|
||||
Args:
|
||||
slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
|
||||
When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
|
||||
@ -84,32 +243,25 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
# set slice_size = `None` to disable `attention slicing`
|
||||
self.enable_attention_slicing(None)
|
||||
|
||||
def enable_minimal_memory_usage(self):
|
||||
"""Moves only unet to fp16 and to CUDA, while keepping lighter models on CPUs"""
|
||||
self.unet.to(torch.float16).to(torch.device("cuda"))
|
||||
self.enable_attention_slicing(1)
|
||||
|
||||
torch.cuda.empty_cache()
|
||||
gc.collect()
|
||||
|
||||
@torch.no_grad()
|
||||
def __call__(
|
||||
self,
|
||||
prompt: Union[str, List[str]],
|
||||
height: Optional[int] = 512,
|
||||
width: Optional[int] = 512,
|
||||
num_inference_steps: Optional[int] = 50,
|
||||
guidance_scale: Optional[float] = 7.5,
|
||||
eta: Optional[float] = 0.0,
|
||||
prompt: Optional[Union[str, List[str]]] = None,
|
||||
height: int = 512,
|
||||
width: int = 512,
|
||||
num_inference_steps: int = 50,
|
||||
guidance_scale: float = 7.5,
|
||||
eta: float = 0.0,
|
||||
generator: Optional[torch.Generator] = None,
|
||||
latents: Optional[torch.FloatTensor] = None,
|
||||
output_type: Optional[str] = "pil",
|
||||
return_dict: bool = True,
|
||||
**kwargs,
|
||||
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
|
||||
callback_steps: Optional[int] = 1,
|
||||
text_embeddings: Optional[torch.FloatTensor] = None,
|
||||
):
|
||||
r"""
|
||||
Function invoked when calling the pipeline for generation.
|
||||
|
||||
Args:
|
||||
prompt (`str` or `List[str]`):
|
||||
The prompt or prompts to guide the image generation.
|
||||
@ -138,11 +290,18 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
tensor will be generated by sampling using the supplied random `generator`.
|
||||
output_type (`str`, *optional*, defaults to `"pil"`):
|
||||
The output format of the generate image. Choose between
|
||||
[PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
|
||||
[PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
|
||||
return_dict (`bool`, *optional*, defaults to `True`):
|
||||
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
|
||||
plain tuple.
|
||||
|
||||
callback (`Callable`, *optional*):
|
||||
A function that will be called every `callback_steps` steps during inference. The function will be
|
||||
called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
|
||||
callback_steps (`int`, *optional*, defaults to 1):
|
||||
The frequency at which the `callback` function will be called. If not specified, the callback will be
|
||||
called at every step.
|
||||
text_embeddings(`torch.FloatTensor`, *optional*):
|
||||
Pre-generated text embeddings.
|
||||
Returns:
|
||||
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
|
||||
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
|
||||
@ -151,37 +310,44 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
(nsfw) content, according to the `safety_checker`.
|
||||
"""
|
||||
|
||||
if "torch_device" in kwargs:
|
||||
# device = kwargs.pop("torch_device")
|
||||
warnings.warn(
|
||||
"`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0."
|
||||
" Consider using `pipe.to(torch_device)` instead."
|
||||
)
|
||||
|
||||
# Set device as before (to be removed in 0.3.0)
|
||||
# if device is None:
|
||||
# device = "cuda" if torch.cuda.is_available() else "cpu"
|
||||
# self.to(device)
|
||||
|
||||
if isinstance(prompt, str):
|
||||
batch_size = 1
|
||||
elif isinstance(prompt, list):
|
||||
batch_size = len(prompt)
|
||||
else:
|
||||
raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
|
||||
|
||||
if height % 8 != 0 or width % 8 != 0:
|
||||
raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
|
||||
|
||||
# get prompt text embeddings
|
||||
text_input = self.tokenizer(
|
||||
prompt,
|
||||
padding="max_length",
|
||||
max_length=self.tokenizer.model_max_length,
|
||||
truncation=True,
|
||||
return_tensors="pt",
|
||||
)
|
||||
text_embeddings = self.text_encoder(text_input.input_ids.to(self.text_encoder.device))[0].to(self.unet.device)
|
||||
if (callback_steps is None) or (
|
||||
callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
|
||||
):
|
||||
raise ValueError(
|
||||
f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
|
||||
f" {type(callback_steps)}."
|
||||
)
|
||||
|
||||
if text_embeddings is None:
|
||||
if isinstance(prompt, str):
|
||||
batch_size = 1
|
||||
elif isinstance(prompt, list):
|
||||
batch_size = len(prompt)
|
||||
else:
|
||||
raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
|
||||
|
||||
# get prompt text embeddings
|
||||
text_inputs = self.tokenizer(
|
||||
prompt,
|
||||
padding="max_length",
|
||||
max_length=self.tokenizer.model_max_length,
|
||||
return_tensors="pt",
|
||||
)
|
||||
text_input_ids = text_inputs.input_ids
|
||||
|
||||
if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
|
||||
removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
|
||||
logger.warning(
|
||||
"The following part of your input was truncated because CLIP can only handle sequences up to"
|
||||
f" {self.tokenizer.model_max_length} tokens: {removed_text}"
|
||||
)
|
||||
text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
|
||||
text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
|
||||
else:
|
||||
batch_size = text_embeddings.shape[0]
|
||||
|
||||
# here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
|
||||
# of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
|
||||
@ -189,13 +355,14 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
do_classifier_free_guidance = guidance_scale > 1.0
|
||||
# get unconditional embeddings for classifier free guidance
|
||||
if do_classifier_free_guidance:
|
||||
max_length = text_input.input_ids.shape[-1]
|
||||
# HACK - Not setting text_input_ids here when walking, so hard coding to max length of tokenizer
|
||||
# TODO - Determine if this is OK to do
|
||||
# max_length = text_input_ids.shape[-1]
|
||||
max_length = self.tokenizer.model_max_length
|
||||
uncond_input = self.tokenizer(
|
||||
[""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
|
||||
)
|
||||
uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.text_encoder.device))[0].to(
|
||||
self.unet.device
|
||||
)
|
||||
uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
|
||||
|
||||
# For classifier free guidance, we need to do two forward passes.
|
||||
# Here we concatenate the unconditional and text embeddings into a single batch
|
||||
@ -214,23 +381,22 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
latents_shape,
|
||||
generator=generator,
|
||||
device=latents_device,
|
||||
dtype=text_embeddings.dtype,
|
||||
)
|
||||
else:
|
||||
if latents.shape != latents_shape:
|
||||
raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
|
||||
latents = latents.to(self.device)
|
||||
latents = latents.to(latents_device)
|
||||
|
||||
# set timesteps
|
||||
accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
|
||||
extra_set_kwargs = {}
|
||||
if accepts_offset:
|
||||
extra_set_kwargs["offset"] = 1
|
||||
self.scheduler.set_timesteps(num_inference_steps)
|
||||
|
||||
self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
|
||||
# Some schedulers like PNDM have timesteps as arrays
|
||||
# It's more optimized to move all timesteps to correct device beforehand
|
||||
timesteps_tensor = self.scheduler.timesteps.to(self.device)
|
||||
|
||||
# if we use LMSDiscreteScheduler, let's make sure latents are multiplied by sigmas
|
||||
if isinstance(self.scheduler, LMSDiscreteScheduler):
|
||||
latents = latents * self.scheduler.sigmas[0]
|
||||
# scale the initial noise by the standard deviation required by the scheduler
|
||||
latents = latents * self.scheduler.init_noise_sigma
|
||||
|
||||
# prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
|
||||
# eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
|
||||
@ -241,18 +407,13 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
if accepts_eta:
|
||||
extra_step_kwargs["eta"] = eta
|
||||
|
||||
for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)):
|
||||
for i, t in enumerate(self.progress_bar(timesteps_tensor)):
|
||||
# expand the latents if we are doing classifier free guidance
|
||||
latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
|
||||
if isinstance(self.scheduler, LMSDiscreteScheduler):
|
||||
sigma = self.scheduler.sigmas[i]
|
||||
# the model input needs to be scaled to match the continuous ODE formulation in K-LMS
|
||||
latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)
|
||||
latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
|
||||
|
||||
# predict the noise residual
|
||||
noise_pred = self.unet(
|
||||
latent_model_input.to(self.unet.device), t.to(self.unet.device), encoder_hidden_states=text_embeddings
|
||||
).sample
|
||||
noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
|
||||
|
||||
# perform guidance
|
||||
if do_classifier_free_guidance:
|
||||
@ -260,29 +421,22 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
|
||||
|
||||
# compute the previous noisy sample x_t -> x_t-1
|
||||
if isinstance(self.scheduler, LMSDiscreteScheduler):
|
||||
latents = self.scheduler.step(
|
||||
noise_pred, i, latents.to(self.unet.device), **extra_step_kwargs
|
||||
).prev_sample
|
||||
else:
|
||||
latents = self.scheduler.step(
|
||||
noise_pred, t.to(self.unet.device), latents.to(self.unet.device), **extra_step_kwargs
|
||||
).prev_sample
|
||||
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
|
||||
|
||||
# call the callback, if provided
|
||||
if callback is not None and i % callback_steps == 0:
|
||||
callback(i, t, latents)
|
||||
|
||||
# scale and decode the image latents with vae
|
||||
latents = 1 / 0.18215 * latents
|
||||
image = self.vae.decode(latents.to(self.vae.device)).sample
|
||||
image = self.vae.decode(latents).sample
|
||||
|
||||
image = (image / 2 + 0.5).clamp(0, 1)
|
||||
image = image.to(self.vae.device).to(self.vae.device).cpu().permute(0, 2, 3, 1).numpy()
|
||||
image = image.cpu().permute(0, 2, 3, 1).numpy()
|
||||
|
||||
# run safety checker
|
||||
safety_cheker_input = (
|
||||
self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt")
|
||||
.to(self.vae.device)
|
||||
.to(self.vae.dtype)
|
||||
safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
|
||||
image, has_nsfw_concept = self.safety_checker(
|
||||
images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
|
||||
)
|
||||
image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_cheker_input.pixel_values)
|
||||
|
||||
if output_type == "pil":
|
||||
image = self.numpy_to_pil(image)
|
||||
@ -290,4 +444,370 @@ class StableDiffusionPipeline(DiffusionPipeline):
|
||||
if not return_dict:
|
||||
return (image, has_nsfw_concept)
|
||||
|
||||
return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
|
||||
return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
|
||||
|
||||
def generate_inputs(self, prompt_a, prompt_b, seed_a, seed_b, noise_shape, T, batch_size):
|
||||
embeds_a = self.embed_text(prompt_a)
|
||||
embeds_b = self.embed_text(prompt_b)
|
||||
latents_a = torch.randn(
|
||||
noise_shape,
|
||||
device=self.device,
|
||||
generator=torch.Generator(device=self.device).manual_seed(seed_a),
|
||||
)
|
||||
latents_b = torch.randn(
|
||||
noise_shape,
|
||||
device=self.device,
|
||||
generator=torch.Generator(device=self.device).manual_seed(seed_b),
|
||||
)
|
||||
|
||||
batch_idx = 0
|
||||
embeds_batch, noise_batch = None, None
|
||||
for i, t in enumerate(T):
|
||||
embeds = torch.lerp(embeds_a, embeds_b, t)
|
||||
noise = slerp(float(t), latents_a, latents_b)
|
||||
|
||||
embeds_batch = embeds if embeds_batch is None else torch.cat([embeds_batch, embeds])
|
||||
noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise])
|
||||
batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == T.shape[0]
|
||||
if not batch_is_ready:
|
||||
continue
|
||||
yield batch_idx, embeds_batch, noise_batch
|
||||
batch_idx += 1
|
||||
del embeds_batch, noise_batch
|
||||
torch.cuda.empty_cache()
|
||||
embeds_batch, noise_batch = None, None
|
||||
|
||||
def generate_interpolation_clip(
|
||||
self,
|
||||
prompt_a: str,
|
||||
prompt_b: str,
|
||||
seed_a: int,
|
||||
seed_b: int,
|
||||
num_interpolation_steps: int = 5,
|
||||
save_path: Union[str, Path] = "outputs/",
|
||||
num_inference_steps: int = 50,
|
||||
guidance_scale: float = 7.5,
|
||||
eta: float = 0.0,
|
||||
height: int = 512,
|
||||
width: int = 512,
|
||||
upsample: bool = False,
|
||||
batch_size: int = 1,
|
||||
image_file_ext: str = ".png",
|
||||
T: np.ndarray = None,
|
||||
skip: int = 0,
|
||||
):
|
||||
save_path = Path(save_path)
|
||||
save_path.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
T = T if T is not None else np.linspace(0.0, 1.0, num_interpolation_steps)
|
||||
if T.shape[0] != num_interpolation_steps:
|
||||
raise ValueError(f"Unexpected T shape, got {T.shape}, expected dim 0 to be {num_interpolation_steps}")
|
||||
|
||||
if upsample:
|
||||
if getattr(self, "upsampler", None) is None:
|
||||
self.upsampler = RealESRGANModel.from_pretrained("nateraw/real-esrgan")
|
||||
self.upsampler.to(self.device)
|
||||
|
||||
batch_generator = self.generate_inputs(
|
||||
prompt_a,
|
||||
prompt_b,
|
||||
seed_a,
|
||||
seed_b,
|
||||
(1, self.unet.in_channels, height // 8, width // 8),
|
||||
T[skip:],
|
||||
batch_size,
|
||||
)
|
||||
|
||||
frame_index = skip
|
||||
for _, embeds_batch, noise_batch in batch_generator:
|
||||
with torch.autocast("cuda"):
|
||||
outputs = self(
|
||||
latents=noise_batch,
|
||||
text_embeddings=embeds_batch,
|
||||
height=height,
|
||||
width=width,
|
||||
guidance_scale=guidance_scale,
|
||||
eta=eta,
|
||||
num_inference_steps=num_inference_steps,
|
||||
output_type="pil" if not upsample else "numpy",
|
||||
)["sample"]
|
||||
|
||||
for image in outputs:
|
||||
frame_filepath = save_path / (f"frame%06d{image_file_ext}" % frame_index)
|
||||
image = image if not upsample else self.upsampler(image)
|
||||
image.save(frame_filepath)
|
||||
frame_index += 1
|
||||
|
||||
def walk(
|
||||
self,
|
||||
prompts: Optional[List[str]] = None,
|
||||
seeds: Optional[List[int]] = None,
|
||||
num_interpolation_steps: Optional[Union[int, List[int]]] = 5, # int or list of int
|
||||
output_dir: Optional[str] = "./dreams",
|
||||
name: Optional[str] = None,
|
||||
image_file_ext: Optional[str] = ".png",
|
||||
fps: Optional[int] = 30,
|
||||
num_inference_steps: Optional[int] = 50,
|
||||
guidance_scale: Optional[float] = 7.5,
|
||||
eta: Optional[float] = 0.0,
|
||||
height: Optional[int] = 512,
|
||||
width: Optional[int] = 512,
|
||||
upsample: Optional[bool] = False,
|
||||
batch_size: Optional[int] = 1,
|
||||
resume: Optional[bool] = False,
|
||||
audio_filepath: str = None,
|
||||
audio_start_sec: Optional[Union[int, float]] = None,
|
||||
):
|
||||
"""Generate a video from a sequence of prompts and seeds. Optionally, add audio to the
|
||||
interpolation rate follows the intensity of the audio.
|
||||
|
||||
Args:
|
||||
prompts (Optional[List[str]], optional):
|
||||
list of text prompts. Defaults to None.
|
||||
seeds (Optional[List[int]], optional):
|
||||
list of random seeds corresponding to prompts. Defaults to None.
|
||||
num_interpolation_steps (Union[int, List[int]], *optional*):
|
||||
How many interpolation steps to take between each pair of prompts. Defaults to 5.
|
||||
output_dir (Optional[str], optional):
|
||||
Where to save the video. Defaults to './dreams'.
|
||||
name (Optional[str], optional):
|
||||
Name of the subdirectory of output_dir. Defaults to None.
|
||||
image_file_ext (Optional[str], *optional*, defaults to '.png'):
|
||||
The extension to use when writing video frames.
|
||||
fps (Optional[int], *optional*, defaults to 30):
|
||||
The frames per second in the resulting output videos.
|
||||
num_inference_steps (Optional[int], *optional*, defaults to 50):
|
||||
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
|
||||
expense of slower inference.
|
||||
guidance_scale (Optional[float], *optional*, defaults to 7.5):
|
||||
Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
|
||||
`guidance_scale` is defined as `w` of equation 2. of [Imagen
|
||||
Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
|
||||
1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
|
||||
usually at the expense of lower image quality.
|
||||
eta (Optional[float], *optional*, defaults to 0.0):
|
||||
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
|
||||
[`schedulers.DDIMScheduler`], will be ignored for others.
|
||||
height (Optional[int], *optional*, defaults to 512):
|
||||
height of the images to generate.
|
||||
width (Optional[int], *optional*, defaults to 512):
|
||||
width of the images to generate.
|
||||
upsample (Optional[bool], *optional*, defaults to False):
|
||||
When True, upsamples images with Real-ESRGAN.
|
||||
batch_size (Optional[int], *optional*, defaults to 1):
|
||||
Number of images to generate at once.
|
||||
resume (Optional[bool], *optional*, defaults to False):
|
||||
When True, resumes from the last frame in the output directory based
|
||||
on available prompt config. Requires you to provide the `name` argument.
|
||||
audio_filepath (str, *optional*, defaults to None):
|
||||
Optional path to an audio file to influence the interpolation rate.
|
||||
audio_start_sec (Optional[Union[int, float]], *optional*, defaults to 0):
|
||||
Global start time of the provided audio_filepath.
|
||||
|
||||
This function will create sub directories for each prompt and seed pair.
|
||||
|
||||
For example, if you provide the following prompts and seeds:
|
||||
|
||||
```
|
||||
prompts = ['a', 'b', 'c']
|
||||
seeds = [1, 2, 3]
|
||||
num_interpolation_steps = 5
|
||||
output_dir = 'output_dir'
|
||||
name = 'name'
|
||||
fps = 5
|
||||
```
|
||||
|
||||
Then the following directories will be created:
|
||||
|
||||
```
|
||||
output_dir
|
||||
├── name
|
||||
│ ├── name_000000
|
||||
│ │ ├── frame000000.png
|
||||
│ │ ├── ...
|
||||
│ │ ├── frame000004.png
|
||||
│ │ ├── name_000000.mp4
|
||||
│ ├── name_000001
|
||||
│ │ ├── frame000000.png
|
||||
│ │ ├── ...
|
||||
│ │ ├── frame000004.png
|
||||
│ │ ├── name_000001.mp4
|
||||
│ ├── ...
|
||||
│ ├── name.mp4
|
||||
│ ├── prompt_config.json
|
||||
```
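
A minimal call sketch for the example above (the variable name `pipe` is an assumption;
it stands for an instance of this pipeline):

```
pipe.walk(
    prompts=['a', 'b', 'c'],
    seeds=[1, 2, 3],
    num_interpolation_steps=5,
    output_dir='output_dir',
    name='name',
    fps=5,
)
```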
|
||||
|
||||
Returns:
|
||||
str: The resulting video filepath. This video includes all sub directories' video clips.
|
||||
"""
|
||||
|
||||
output_path = Path(output_dir)
|
||||
|
||||
name = name or time.strftime("%Y%m%d-%H%M%S")
|
||||
save_path_root = output_path / name
|
||||
save_path_root.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Where the final video of all the clips combined will be saved
|
||||
output_filepath = save_path_root / f"{name}.mp4"
|
||||
|
||||
# If a single int is given for the interpolation steps, expand it into a list (one entry per prompt pair)
|
||||
if not resume and isinstance(num_interpolation_steps, int):
|
||||
num_interpolation_steps = [num_interpolation_steps] * (len(prompts) - 1)
|
||||
|
||||
if not resume:
|
||||
audio_start_sec = audio_start_sec or 0
|
||||
|
||||
# Save/reload prompt config
|
||||
prompt_config_path = save_path_root / "prompt_config.json"
|
||||
if not resume:
|
||||
prompt_config_path.write_text(
|
||||
json.dumps(
|
||||
dict(
|
||||
prompts=prompts,
|
||||
seeds=seeds,
|
||||
num_interpolation_steps=num_interpolation_steps,
|
||||
fps=fps,
|
||||
num_inference_steps=num_inference_steps,
|
||||
guidance_scale=guidance_scale,
|
||||
eta=eta,
|
||||
upsample=upsample,
|
||||
height=height,
|
||||
width=width,
|
||||
audio_filepath=audio_filepath,
|
||||
audio_start_sec=audio_start_sec,
|
||||
),
|
||||
indent=2,
|
||||
sort_keys=False,
|
||||
)
|
||||
)
|
||||
else:
|
||||
data = json.load(open(prompt_config_path))
|
||||
prompts = data["prompts"]
|
||||
seeds = data["seeds"]
|
||||
num_interpolation_steps = data["num_interpolation_steps"]
|
||||
fps = data["fps"]
|
||||
num_inference_steps = data["num_inference_steps"]
|
||||
guidance_scale = data["guidance_scale"]
|
||||
eta = data["eta"]
|
||||
upsample = data["upsample"]
|
||||
height = data["height"]
|
||||
width = data["width"]
|
||||
audio_filepath = data["audio_filepath"]
|
||||
audio_start_sec = data["audio_start_sec"]
|
||||
|
||||
for i, (prompt_a, prompt_b, seed_a, seed_b, num_step) in enumerate(
|
||||
zip(prompts, prompts[1:], seeds, seeds[1:], num_interpolation_steps)
|
||||
):
|
||||
# {name}_000000 / {name}_000001 / ...
|
||||
save_path = save_path_root / f"{name}_{i:06d}"
|
||||
|
||||
# Where the individual clips will be saved
|
||||
step_output_filepath = save_path / f"{name}_{i:06d}.mp4"
|
||||
|
||||
# Determine if we need to resume from a previous run
|
||||
skip = 0
|
||||
if resume:
|
||||
if step_output_filepath.exists():
|
||||
print(f"Skipping {save_path} because frames already exist")
|
||||
continue
|
||||
|
||||
existing_frames = sorted(save_path.glob(f"*{image_file_ext}"))
|
||||
if existing_frames:
|
||||
skip = int(existing_frames[-1].stem[-6:]) + 1
|
||||
if skip + 1 >= num_step:
|
||||
print(f"Skipping {save_path} because frames already exist")
|
||||
continue
|
||||
print(f"Resuming {save_path.name} from frame {skip}")
|
||||
|
||||
audio_offset = audio_start_sec + sum(num_interpolation_steps[:i]) / fps
|
||||
audio_duration = num_step / fps
|
||||
|
||||
self.generate_interpolation_clip(
|
||||
prompt_a,
|
||||
prompt_b,
|
||||
seed_a,
|
||||
seed_b,
|
||||
num_interpolation_steps=num_step,
|
||||
save_path=save_path,
|
||||
num_inference_steps=num_inference_steps,
|
||||
guidance_scale=guidance_scale,
|
||||
eta=eta,
|
||||
height=height,
|
||||
width=width,
|
||||
upsample=upsample,
|
||||
batch_size=batch_size,
|
||||
skip=skip,
|
||||
T=get_timesteps_arr(
|
||||
audio_filepath,
|
||||
offset=audio_offset,
|
||||
duration=audio_duration,
|
||||
fps=fps,
|
||||
margin=(1.0, 5.0),
|
||||
)
|
||||
if audio_filepath
|
||||
else None,
|
||||
)
|
||||
make_video_pyav(
|
||||
save_path,
|
||||
audio_filepath=audio_filepath,
|
||||
fps=fps,
|
||||
output_filepath=step_output_filepath,
|
||||
glob_pattern=f"*{image_file_ext}",
|
||||
audio_offset=audio_offset,
|
||||
audio_duration=audio_duration,
|
||||
sr=44100,
|
||||
)
|
||||
|
||||
return make_video_pyav(
|
||||
save_path_root,
|
||||
audio_filepath=audio_filepath,
|
||||
fps=fps,
|
||||
audio_offset=audio_start_sec,
|
||||
audio_duration=sum(num_interpolation_steps) / fps,
|
||||
output_filepath=output_filepath,
|
||||
glob_pattern=f"**/*{image_file_ext}",
|
||||
sr=44100,
|
||||
)
|
||||
|
||||
def embed_text(self, text):
|
||||
"""Helper to embed some text"""
|
||||
with torch.autocast("cuda"):
|
||||
text_input = self.tokenizer(
|
||||
text,
|
||||
padding="max_length",
|
||||
max_length=self.tokenizer.model_max_length,
|
||||
truncation=True,
|
||||
return_tensors="pt",
|
||||
)
|
||||
with torch.no_grad():
|
||||
embed = self.text_encoder(text_input.input_ids.to(self.device))[0]
|
||||
return embed
|
||||
|
||||
@classmethod
|
||||
def from_pretrained(cls, *args, tiled=False, **kwargs):
|
||||
"""Same as diffusers `from_pretrained` but with tiled option, which makes images tilable"""
|
||||
if tiled:
|
||||
|
||||
def patch_conv(**patch):
|
||||
cls = nn.Conv2d
|
||||
init = cls.__init__
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
return init(self, *args, **kwargs, **patch)
|
||||
|
||||
cls.__init__ = __init__
|
||||
|
||||
patch_conv(padding_mode="circular")
|
||||
|
||||
return super().from_pretrained(*args, **kwargs)
|
||||
|
||||
|
||||
class NoCheck(ModelMixin):
|
||||
"""Can be used in place of safety checker. Use responsibly and at your own risk."""
|
||||
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
self.register_parameter(name="asdf", param=torch.nn.Parameter(torch.randn(3)))
|
||||
|
||||
def forward(self, images=None, **kwargs):
|
||||
return images, [False]
|
||||
|
@ -12,7 +12,7 @@
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
# base webui import and utils.
|
||||
from sd_utils import *
|
||||
|
||||
@ -28,7 +28,7 @@ from transformers import CLIPTextModel, CLIPTokenizer
|
||||
import argparse
|
||||
import itertools
|
||||
import math
|
||||
import os
|
||||
import os, sys
|
||||
import random
|
||||
#import datetime
|
||||
#from pathlib import Path
|
||||
@ -210,22 +210,22 @@ def freeze_params(params):
|
||||
param.requires_grad = False
|
||||
|
||||
|
||||
def save_resume_file(basepath, extra = {}, config=''):
|
||||
def save_resume_file(basepath, extra = {}, config=''):
|
||||
info = {"args": config["args"]}
|
||||
info["args"].update(extra)
|
||||
|
||||
|
||||
with open(f"{os.path.join(basepath, 'resume.json')}", "w") as f:
|
||||
#print (info)
|
||||
json.dump(info, f, indent=4)
|
||||
|
||||
|
||||
with open(f"{basepath}/token_identifier.txt", "w") as f:
|
||||
f.write(f"{config['args']['placeholder_token']}")
|
||||
|
||||
|
||||
with open(f"{basepath}/type_of_concept.txt", "w") as f:
|
||||
f.write(f"{config['args']['learnable_property']}")
|
||||
|
||||
|
||||
config['args'] = info["args"]
|
||||
|
||||
|
||||
return config['args']
|
||||
|
||||
class Checkpointer:
|
||||
@ -277,7 +277,7 @@ class Checkpointer:
|
||||
else:
|
||||
torch.save(learned_embeds_dict, f"{checkpoints_path}/{filename}")
|
||||
torch.save(learned_embeds_dict, f"{checkpoints_path}/last.bin")
|
||||
|
||||
|
||||
del unwrapped
|
||||
del learned_embeds
|
||||
|
||||
@ -286,15 +286,15 @@ class Checkpointer:
|
||||
def save_samples(self, step, text_encoder, height, width, guidance_scale, eta, num_inference_steps):
|
||||
samples_path = f"{self.output_dir}/concept_images"
|
||||
os.makedirs(samples_path, exist_ok=True)
|
||||
|
||||
|
||||
#if "checker" not in server_state['textual_inversion']:
|
||||
#with server_state_lock['textual_inversion']["checker"]:
|
||||
server_state['textual_inversion']["checker"] = NoCheck()
|
||||
|
||||
|
||||
#if "unwrapped" not in server_state['textual_inversion']:
|
||||
# with server_state_lock['textual_inversion']["unwrapped"]:
|
||||
server_state['textual_inversion']["unwrapped"] = self.accelerator.unwrap_model(text_encoder)
|
||||
|
||||
|
||||
#if "pipeline" not in server_state['textual_inversion']:
|
||||
# with server_state_lock['textual_inversion']["pipeline"]:
|
||||
# Save a sample image
|
||||
@ -309,7 +309,7 @@ class Checkpointer:
|
||||
safety_checker=NoCheck(),
|
||||
feature_extractor=CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32"),
|
||||
).to("cuda")
|
||||
|
||||
|
||||
server_state['textual_inversion']["pipeline"].enable_attention_slicing()
|
||||
|
||||
if self.stable_sample_batches > 0:
|
||||
@ -333,7 +333,7 @@ class Checkpointer:
|
||||
num_inference_steps=num_inference_steps,
|
||||
output_type='pil'
|
||||
)["sample"]
|
||||
|
||||
|
||||
for idx, im in enumerate(samples):
|
||||
filename = f"stable_sample_%d_%d_step_%d.png" % (i+1, idx+1, step)
|
||||
im.save(f"{samples_path}/{filename}")
|
||||
@ -365,28 +365,28 @@ class Checkpointer:
|
||||
#@retry(RuntimeError, tries=5)
|
||||
def textual_inversion(config):
|
||||
print ("Running textual inversion.")
|
||||
|
||||
|
||||
#if "pipeline" in server_state["textual_inversion"]:
|
||||
#del server_state['textual_inversion']["checker"]
|
||||
#del server_state['textual_inversion']["unwrapped"]
|
||||
#del server_state['textual_inversion']["pipeline"]
|
||||
#torch.cuda.empty_cache()
|
||||
|
||||
|
||||
global_step_offset = 0
|
||||
|
||||
|
||||
#print(config['args']['resume_from'])
|
||||
if config['args']['resume_from']:
|
||||
try:
|
||||
basepath = f"{config['args']['resume_from']}"
|
||||
|
||||
|
||||
with open(f"{basepath}/resume.json", 'r') as f:
|
||||
state = json.load(f)
|
||||
|
||||
|
||||
global_step_offset = state["args"].get("global_step", 0)
|
||||
|
||||
|
||||
print("Resuming state from %s" % config['args']['resume_from'])
|
||||
print("We've trained %d steps so far" % global_step_offset)
|
||||
|
||||
|
||||
except json.decoder.JSONDecodeError:
|
||||
pass
|
||||
else:
|
||||
@ -398,7 +398,7 @@ def textual_inversion(config):
|
||||
gradient_accumulation_steps=config['args']['gradient_accumulation_steps'],
|
||||
mixed_precision=config['args']['mixed_precision']
|
||||
)
|
||||
|
||||
|
||||
# If passed along, set the training seed.
|
||||
if config['args']['seed']:
|
||||
set_seed(config['args']['seed'])
|
||||
@ -442,9 +442,9 @@ def textual_inversion(config):
|
||||
server_state['textual_inversion']["vae"] = AutoencoderKL.from_pretrained(
|
||||
config['args']['pretrained_model_name_or_path'] + '/vae',
|
||||
)
|
||||
|
||||
|
||||
#if "unet" not in server_state['textual_inversion']:
|
||||
#with server_state_lock['textual_inversion']["unet"]:
|
||||
#with server_state_lock['textual_inversion']["unet"]:
|
||||
server_state['textual_inversion']["unet"] = UNet2DConditionModel.from_pretrained(
|
||||
config['args']['pretrained_model_name_or_path'] + '/unet',
|
||||
)
|
||||
@ -640,18 +640,18 @@ def textual_inversion(config):
|
||||
"global_step": global_step + global_step_offset,
|
||||
"resume_checkpoint": f"{basepath}/checkpoints/last.bin"
|
||||
}, config)
|
||||
|
||||
|
||||
checkpointer.save_samples(
|
||||
global_step + global_step_offset,
|
||||
server_state['textual_inversion']["text_encoder"],
|
||||
config['args']['resolution'], config['args'][
|
||||
'resolution'], 7.5, 0.0, config['args']['sample_steps'])
|
||||
|
||||
|
||||
checkpointer.checkpoint(
|
||||
global_step + global_step_offset,
|
||||
server_state['textual_inversion']["text_encoder"],
|
||||
path=f"{basepath}/learned_embeds.bin"
|
||||
)
|
||||
)
|
||||
#except KeyError:
|
||||
#raise StopException
|
||||
|
||||
@ -659,7 +659,7 @@ def textual_inversion(config):
|
||||
progress_bar.set_postfix(**logs)
|
||||
|
||||
#accelerator.log(logs, step=global_step)
|
||||
|
||||
|
||||
#try:
|
||||
if global_step >= config['args']['max_train_steps']:
|
||||
break
|
||||
@ -686,166 +686,166 @@ def textual_inversion(config):
|
||||
|
||||
except (KeyboardInterrupt, StopException) as e:
|
||||
print(f"Received Streamlit StopException or KeyboardInterrupt")
|
||||
|
||||
|
||||
if accelerator.is_main_process:
|
||||
print("Interrupted, saving checkpoint and resume state...")
|
||||
checkpointer.checkpoint(global_step + global_step_offset, server_state['textual_inversion']["text_encoder"])
|
||||
|
||||
|
||||
config['args'] = save_resume_file(basepath, {
|
||||
"global_step": global_step + global_step_offset,
|
||||
"resume_checkpoint": f"{basepath}/checkpoints/last.bin"
|
||||
}, config)
|
||||
|
||||
|
||||
|
||||
|
||||
checkpointer.checkpoint(
|
||||
global_step + global_step_offset,
|
||||
server_state['textual_inversion']["text_encoder"],
|
||||
path=f"{basepath}/learned_embeds.bin"
|
||||
)
|
||||
|
||||
|
||||
quit()
|
||||
|
||||
|
||||
def layout():
|
||||
|
||||
|
||||
with st.form("textual-inversion"):
|
||||
#st.info("Under Construction. :construction_worker:")
|
||||
#parser = argparse.ArgumentParser(description="Simple example of a training script.")
|
||||
|
||||
|
||||
set_page_title("Textual Inversion - Stable Diffusion Playground")
|
||||
|
||||
|
||||
config_tab, output_tab, tensorboard_tab = st.tabs(["Textual Inversion Config", "Output", "TensorBoard"])
|
||||
|
||||
|
||||
with config_tab:
|
||||
col1, col2, col3, col4, col5 = st.columns(5, gap='large')
|
||||
|
||||
|
||||
if "textual_inversion" not in st.session_state:
|
||||
st.session_state["textual_inversion"] = {}
|
||||
|
||||
|
||||
if "textual_inversion" not in server_state:
|
||||
server_state["textual_inversion"] = {}
|
||||
|
||||
|
||||
if "args" not in st.session_state["textual_inversion"]:
|
||||
st.session_state["textual_inversion"]["args"] = {}
|
||||
|
||||
|
||||
|
||||
|
||||
with col1:
|
||||
st.session_state["textual_inversion"]["args"]["pretrained_model_name_or_path"] = st.text_input("Pretrained Model Path",
|
||||
value=st.session_state["defaults"].textual_inversion.pretrained_model_name_or_path,
|
||||
help="Path to pretrained model or model identifier from huggingface.co/models.")
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["tokenizer_name"] = st.text_input("Tokenizer Name",
|
||||
value=st.session_state["defaults"].textual_inversion.tokenizer_name,
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["tokenizer_name"] = st.text_input("Tokenizer Name",
|
||||
value=st.session_state["defaults"].textual_inversion.tokenizer_name,
|
||||
help="Pretrained tokenizer name or path if not the same as model_name")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["train_data_dir"] = st.text_input("train_data_dir", value="", help="A folder containing the training data.")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["placeholder_token"] = st.text_input("Placeholder Token", value="", help="A token to use as a placeholder for the concept.")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["initializer_token"] = st.text_input("Initializer Token", value="", help="A token to use as initializer word.")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["learnable_property"] = st.selectbox("Learnable Property", ["object", "style"], index=0, help="Choose between 'object' and 'style'")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["repeats"] = int(st.text_input("Number of times to Repeat", value=100, help="How many times to repeat the training data."))
|
||||
|
||||
|
||||
with col2:
|
||||
st.session_state["textual_inversion"]["args"]["output_dir"] = st.text_input("Output Directory",
|
||||
value=str(os.path.join("outputs", "textual_inversion")),
|
||||
help="The output directory where the model predictions and checkpoints will be written.")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["seed"] = seed_to_int(st.text_input("Seed", value=0,
|
||||
help="A seed for reproducible training, if left empty a random one will be generated. Default: 0"))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["resolution"] = int(st.text_input("Resolution", value=512,
|
||||
help="The resolution for input images, all the images in the train/validation dataset will be resized to this resolution"))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["center_crop"] = st.checkbox("Center Image", value=True, help="Whether to center crop images before resizing to resolution")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["train_batch_size"] = int(st.text_input("Train Batch Size", value=1, help="Batch size (per device) for the training dataloader."))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["num_train_epochs"] = int(st.text_input("Number of Steps to Train", value=100, help="Number of steps to train."))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["max_train_steps"] = int(st.text_input("Max Number of Steps to Train", value=5000,
|
||||
help="Total number of training steps to perform. If provided, overrides 'Number of Steps to Train'."))
|
||||
|
||||
|
||||
with col3:
|
||||
st.session_state["textual_inversion"]["args"]["gradient_accumulation_steps"] = int(st.text_input("Gradient Accumulation Steps", value=1,
|
||||
help="Number of updates steps to accumulate before performing a backward/update pass."))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["learning_rate"] = float(st.text_input("Learning Rate", value=5.0e-04,
|
||||
help="Initial learning rate (after the potential warmup period) to use."))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["scale_lr"] = st.checkbox("Scale Learning Rate", value=True,
|
||||
help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["lr_scheduler"] = st.text_input("Learning Rate Scheduler", value="constant",
|
||||
help=("The scheduler type to use. Choose between ['linear', 'cosine', 'cosine_with_restarts', 'polynomial',"
|
||||
" 'constant', 'constant_with_warmup']" ))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["lr_warmup_steps"] = int(st.text_input("Learning Rate Warmup Steps", value=500, help="Number of steps for the warmup in the lr scheduler."))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["adam_beta1"] = float(st.text_input("Adam Beta 1", value=0.9, help="The beta1 parameter for the Adam optimizer."))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["adam_beta2"] = float(st.text_input("Adam Beta 2", value=0.999, help="The beta2 parameter for the Adam optimizer."))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["adam_weight_decay"] = float(st.text_input("Adam Weight Decay", value=1e-2, help="Weight decay to use."))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["adam_epsilon"] = float(st.text_input("Adam Epsilon", value=1e-08, help="Epsilon value for the Adam optimizer"))
|
||||
|
||||
|
||||
with col4:
|
||||
st.session_state["textual_inversion"]["args"]["mixed_precision"] = st.selectbox("Mixed Precision", ["no", "fp16", "bf16"], index=1,
|
||||
help="Whether to use mixed precision. Choose" "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
|
||||
"and an Nvidia Ampere GPU.")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["local_rank"] = int(st.text_input("Local Rank", value=1, help="For distributed training: local_rank"))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["checkpoint_frequency"] = int(st.text_input("Checkpoint Frequency", value=500, help="How often to save a checkpoint and sample image"))
|
||||
|
||||
|
||||
# stable_sample_batches is crashing when saving the samples so for now I will disable it util its fixed.
|
||||
#st.session_state["textual_inversion"]["args"]["stable_sample_batches"] = int(st.text_input("Stable Sample Batches", value=0,
|
||||
#help="Number of fixed seed sample batches to generate per checkpoint"))
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["stable_sample_batches"] = 0
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["stable_sample_batches"] = 0
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["random_sample_batches"] = int(st.text_input("Random Sample Batches", value=2,
|
||||
help="Number of random seed sample batches to generate per checkpoint"))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["sample_batch_size"] = int(st.text_input("Sample Batch Size", value=1, help="Number of samples to generate per batch"))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["sample_steps"] = int(st.text_input("Sample Steps", value=100,
|
||||
help="Number of steps for sample generation. Higher values will result in more detailed samples, but longer runtimes."))
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["custom_templates"] = st.text_input("Custom Templates", value="",
|
||||
help="A semicolon-delimited list of custom template to use for samples, using {} as a placeholder for the concept.")
|
||||
with col5:
|
||||
with col5:
|
||||
st.session_state["textual_inversion"]["args"]["resume"] = st.checkbox(label="Resume Previous Run?", value=False,
|
||||
help="Resume previous run, if a valid resume.json file is on the output dir \
|
||||
it will be used, otherwise if the 'Resume From' field below contains a valid resume.json file \
|
||||
that one will be used.")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["resume_from"] = st.text_input(label="Resume From", help="Path to a directory to resume training from (ie, logs/token_name)")
|
||||
|
||||
|
||||
#st.session_state["textual_inversion"]["args"]["resume_checkpoint"] = st.file_uploader("Resume Checkpoint", type=["bin"],
|
||||
#help="Path to a specific checkpoint to resume training from (ie, logs/token_name/checkpoints/something.bin).")
|
||||
|
||||
|
||||
#st.session_state["textual_inversion"]["args"]["st.session_state["textual_inversion"]"] = st.file_uploader("st.session_state["textual_inversion"] File", type=["json"],
|
||||
#help="Path to a JSON st.session_state["textual_inversion"]uration file containing arguments for invoking this script."
|
||||
#"If resume_from is given, its resume.json takes priority over this.")
|
||||
#
|
||||
#
|
||||
#print (os.path.join(st.session_state["textual_inversion"]["args"]["output_dir"],st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"),"resume.json"))
|
||||
#print (os.path.exists(os.path.join(st.session_state["textual_inversion"]["args"]["output_dir"],st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"),"resume.json")))
|
||||
if os.path.exists(os.path.join(st.session_state["textual_inversion"]["args"]["output_dir"],st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"),"resume.json")):
|
||||
st.session_state["textual_inversion"]["args"]["resume_from"] = os.path.join(
|
||||
st.session_state["textual_inversion"]["args"]["output_dir"], st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"))
|
||||
#print (st.session_state["textual_inversion"]["args"]["resume_from"])
|
||||
|
||||
|
||||
if os.path.exists(os.path.join(st.session_state["textual_inversion"]["args"]["output_dir"],st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"), "checkpoints","last.bin")):
|
||||
st.session_state["textual_inversion"]["args"]["resume_checkpoint"] = os.path.join(
|
||||
st.session_state["textual_inversion"]["args"]["output_dir"], st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"), "checkpoints","last.bin")
|
||||
|
||||
st.session_state["textual_inversion"]["args"]["output_dir"], st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"), "checkpoints","last.bin")
|
||||
|
||||
#if "resume_from" in st.session_state["textual_inversion"]["args"]:
|
||||
#if st.session_state["textual_inversion"]["args"]["resume_from"]:
|
||||
#if os.path.exists(os.path.join(st.session_state["textual_inversion"]['args']['resume_from'], "resume.json")):
|
||||
#if os.path.exists(os.path.join(st.session_state["textual_inversion"]['args']['resume_from'], "resume.json")):
|
||||
#with open(os.path.join(st.session_state["textual_inversion"]['args']['resume_from'], "resume.json"), 'rt') as f:
|
||||
#try:
|
||||
#resume_json = json.load(f)["args"]
|
||||
@ -854,87 +854,86 @@ def layout():
|
||||
#st.session_state["textual_inversion"]["args"]["output_dir"], st.session_state["textual_inversion"]["args"]["placeholder_token"].strip("<>"))
|
||||
#except json.decoder.JSONDecodeError:
|
||||
#pass
|
||||
|
||||
|
||||
#print(st.session_state["textual_inversion"]["args"])
|
||||
#print(st.session_state["textual_inversion"]["args"]['resume_from'])
|
||||
|
||||
|
||||
#elif st.session_state["textual_inversion"]["args"]["st.session_state["textual_inversion"]"] is not None:
|
||||
#with open(st.session_state["textual_inversion"]["args"]["st.session_state["textual_inversion"]"], 'rt') as f:
|
||||
#args = parser.parse_args(namespace=argparse.Namespace(**json.load(f)["args"]))
|
||||
|
||||
|
||||
env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
|
||||
if env_local_rank != -1 and env_local_rank != st.session_state["textual_inversion"]["args"]["local_rank"]:
|
||||
st.session_state["textual_inversion"]["args"]["local_rank"] = env_local_rank
|
||||
|
||||
|
||||
if st.session_state["textual_inversion"]["args"]["train_data_dir"] is None:
|
||||
st.error("You must specify --train_data_dir")
|
||||
|
||||
|
||||
if st.session_state["textual_inversion"]["args"]["pretrained_model_name_or_path"] is None:
|
||||
st.error("You must specify --pretrained_model_name_or_path")
|
||||
|
||||
|
||||
if st.session_state["textual_inversion"]["args"]["placeholder_token"] is None:
|
||||
st.error("You must specify --placeholder_token")
|
||||
|
||||
|
||||
if st.session_state["textual_inversion"]["args"]["initializer_token"] is None:
|
||||
st.error("You must specify --initializer_token")
|
||||
|
||||
|
||||
if st.session_state["textual_inversion"]["args"]["output_dir"] is None:
|
||||
st.error("You must specify --output_dir")
|
||||
|
||||
|
||||
# add a spacer and the submit button for the form.
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["message"] = st.empty()
|
||||
st.session_state["textual_inversion"]["progress_bar"] = st.empty()
|
||||
|
||||
|
||||
st.write("---")
|
||||
|
||||
|
||||
submit = st.form_submit_button("Run",help="")
|
||||
if submit:
|
||||
if "pipe" in st.session_state:
|
||||
del st.session_state["pipe"]
|
||||
if "model" in st.session_state:
|
||||
del st.session_state["model"]
|
||||
|
||||
|
||||
set_page_title("Running Textual Inversion - Stable Diffusion WebUI")
|
||||
#st.session_state["textual_inversion"]["message"].info("Textual Inversion Running. For more info check the progress on your console or the Ouput Tab.")
|
||||
|
||||
|
||||
try:
|
||||
#try:
|
||||
# run textual inversion.
|
||||
config = st.session_state['textual_inversion']
textual_inversion(config)
|
||||
#except RuntimeError:
|
||||
#if "pipeline" in server_state["textual_inversion"]:
|
||||
#del server_state['textual_inversion']["checker"]
|
||||
#del server_state['textual_inversion']["unwrapped"]
|
||||
#del server_state['textual_inversion']["pipeline"]
|
||||
|
||||
#del server_state['textual_inversion']["pipeline"]
|
||||
|
||||
# run textual inversion.
|
||||
#config = st.session_state['textual_inversion']
|
||||
#textual_inversion(config)
|
||||
|
||||
#textual_inversion(config)
|
||||
|
||||
set_page_title("Textual Inversion - Stable Diffusion WebUI")
|
||||
|
||||
|
||||
except StopException:
|
||||
set_page_title("Textual Inversion - Stable Diffusion WebUI")
|
||||
print(f"Received Streamlit StopException")
|
||||
|
||||
|
||||
st.session_state["textual_inversion"]["message"].empty()
|
||||
|
||||
|
||||
#
|
||||
with output_tab:
|
||||
st.info("Under Construction. :construction_worker:")
|
||||
|
||||
|
||||
#st.info("Nothing to show yet. Maybe try running some training first.")
|
||||
|
||||
|
||||
#st.session_state["textual_inversion"]["preview_image"] = st.empty()
|
||||
#st.session_state["textual_inversion"]["progress_bar"] = st.empty()
|
||||
|
||||
|
||||
#st.session_state["textual_inversion"]["progress_bar"] = st.empty()
|
||||
|
||||
|
||||
with tensorboard_tab:
|
||||
#st.info("Under Construction. :construction_worker:")
|
||||
|
||||
|
||||
# Start TensorBoard
|
||||
st_tensorboard(logdir=os.path.join("outputs", "textual_inversion"), port=8888)
|
||||
|
||||
|
||||
|
||||
|
@ -25,16 +25,19 @@ from streamlit.elements.image import image_to_url
|
||||
|
||||
#other imports
|
||||
import uuid
|
||||
from typing import Union
|
||||
from ldm.models.diffusion.ddim import DDIMSampler
|
||||
from ldm.models.diffusion.plms import PLMSSampler
|
||||
|
||||
# streamlit components
|
||||
from custom_components import key_phrase_suggestions
|
||||
|
||||
# Temp imports
|
||||
|
||||
|
||||
# end of imports
|
||||
#---------------------------------------------------------------------------------------------------------------
|
||||
|
||||
key_phrase_suggestions.init()
|
||||
|
||||
try:
|
||||
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
|
||||
@ -90,81 +93,299 @@ class plugin_info():
|
||||
isTab = True
|
||||
displayPriority = 1
|
||||
|
||||
@logger.catch(reraise=True)
|
||||
def stable_horde(outpath, prompt, seed, sampler_name, save_grid, batch_size,
|
||||
n_iter, steps, cfg_scale, width, height, prompt_matrix, use_GFPGAN, GFPGAN_model,
|
||||
use_RealESRGAN, realesrgan_model_name, use_LDSR,
|
||||
LDSR_model_name, ddim_eta, normalize_prompt_weights,
|
||||
save_individual_images, sort_samples, write_info_files,
|
||||
jpg_sample, variant_amount, variant_seed, api_key,
|
||||
nsfw=True, censor_nsfw=False):
|
||||
|
||||
log = []
|
||||
|
||||
log.append("Generating image with Stable Horde.")
|
||||
|
||||
st.session_state["progress_bar_text"].code('\n'.join(str(log)), language='')
|
||||
|
||||
# start time after garbage collection (or before?)
|
||||
start_time = time.time()
|
||||
|
||||
# We will use this date later for the folder name; drop start_time if it is not needed.
|
||||
run_start_dt = datetime.datetime.now()
|
||||
|
||||
mem_mon = MemUsageMonitor('MemMon')
|
||||
mem_mon.start()
|
||||
|
||||
os.makedirs(outpath, exist_ok=True)
|
||||
|
||||
sample_path = os.path.join(outpath, "samples")
|
||||
os.makedirs(sample_path, exist_ok=True)
|
||||
|
||||
params = {
|
||||
"sampler_name": "k_euler",
|
||||
"toggles": [1,4],
|
||||
"cfg_scale": cfg_scale,
|
||||
"seed": str(seed),
|
||||
"width": width,
|
||||
"height": height,
|
||||
"seed_variation": variant_seed if variant_seed else 1,
|
||||
"steps": int(steps),
|
||||
"n": int(n_iter)
|
||||
# You can put extra params here if you wish
|
||||
}
|
||||
|
||||
final_submit_dict = {
|
||||
"prompt": prompt,
|
||||
"params": params,
|
||||
"nsfw": nsfw,
|
||||
"censor_nsfw": censor_nsfw,
|
||||
"trusted_workers": True,
|
||||
"workers": []
|
||||
}
|
||||
log.append(final_submit_dict)
|
||||
|
||||
headers = {"apikey": api_key}
|
||||
logger.debug(final_submit_dict)
|
||||
st.session_state["progress_bar_text"].code('\n'.join(str(log)), language='')
|
||||
|
||||
horde_url = "https://stablehorde.net"
|
||||
|
||||
submit_req = requests.post(f'{horde_url}/api/v2/generate/async', json = final_submit_dict, headers = headers)
|
||||
if submit_req.ok:
|
||||
submit_results = submit_req.json()
|
||||
logger.debug(submit_results)
|
||||
|
||||
log.append(submit_results)
|
||||
st.session_state["progress_bar_text"].code('\n'.join(str(log)), language='')
|
||||
|
||||
req_id = submit_results['id']
|
||||
is_done = False
|
||||
while not is_done:
|
||||
chk_req = requests.get(f'{horde_url}/api/v2/generate/check/{req_id}')
|
||||
if not chk_req.ok:
|
||||
logger.error(chk_req.text)
|
||||
return
|
||||
chk_results = chk_req.json()
|
||||
logger.info(chk_results)
|
||||
is_done = chk_results['done']
|
||||
time.sleep(1)
|
||||
retrieve_req = requests.get(f'{horde_url}/api/v2/generate/status/{req_id}')
|
||||
if not retrieve_req.ok:
|
||||
logger.error(retrieve_req.text)
|
||||
return
|
||||
results_json = retrieve_req.json()
|
||||
# logger.debug(results_json)
|
||||
results = results_json['generations']
|
||||
|
||||
output_images = []
|
||||
comments = []
|
||||
prompt_matrix_parts = []
|
||||
|
||||
if not st.session_state['defaults'].general.no_verify_input:
|
||||
try:
|
||||
check_prompt_length(prompt, comments)
|
||||
except:
|
||||
import traceback
|
||||
logger.info("Error verifying input:", file=sys.stderr)
|
||||
logger.info(traceback.format_exc(), file=sys.stderr)
|
||||
|
||||
all_prompts = batch_size * n_iter * [prompt]
|
||||
all_seeds = [seed + x for x in range(len(all_prompts))]
|
||||
|
||||
for iter in range(len(results)):
|
||||
b64img = results[iter]["img"]
|
||||
base64_bytes = b64img.encode('utf-8')
|
||||
img_bytes = base64.b64decode(base64_bytes)
|
||||
img = Image.open(BytesIO(img_bytes))
|
||||
|
||||
sanitized_prompt = slugify(prompt)
|
||||
|
||||
prompts = all_prompts[iter * batch_size:(iter + 1) * batch_size]
|
||||
#captions = prompt_matrix_parts[n * batch_size:(n + 1) * batch_size]
|
||||
seeds = all_seeds[iter * batch_size:(iter + 1) * batch_size]
|
||||
|
||||
if sort_samples:
|
||||
full_path = os.path.join(os.getcwd(), sample_path, sanitized_prompt)
|
||||
|
||||
|
||||
sanitized_prompt = sanitized_prompt[:200-len(full_path)]
|
||||
sample_path_i = os.path.join(sample_path, sanitized_prompt)
|
||||
|
||||
#print(f"output folder length: {len(os.path.join(os.getcwd(), sample_path_i))}")
|
||||
#print(os.path.join(os.getcwd(), sample_path_i))
|
||||
|
||||
os.makedirs(sample_path_i, exist_ok=True)
|
||||
base_count = get_next_sequence_number(sample_path_i)
|
||||
filename = f"{base_count:05}-{steps}_{sampler_name}_{seeds[iter]}"
|
||||
else:
|
||||
full_path = os.path.join(os.getcwd(), sample_path)
|
||||
sample_path_i = sample_path
|
||||
base_count = get_next_sequence_number(sample_path_i)
|
||||
filename = f"{base_count:05}-{steps}_{sampler_name}_{seed}_{sanitized_prompt}"[:200-len(full_path)] #same as before
|
||||
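# Note: the slice above trims the filename so that len(full_path) + len(filename) stays
# around 200 characters, guarding against over-long output paths on the filesystem.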
|
||||
save_sample(img, sample_path_i, filename, jpg_sample, prompts, seeds, width, height, steps, cfg_scale,
|
||||
normalize_prompt_weights, use_GFPGAN, write_info_files, prompt_matrix, init_img=None,
|
||||
denoising_strength=0.75, resize_mode=None, uses_loopback=False, uses_random_seed_loopback=False,
|
||||
save_grid=save_grid,
|
||||
sort_samples=sort_samples, sampler_name=sampler_name, ddim_eta=ddim_eta, n_iter=n_iter,
|
||||
batch_size=batch_size, i=iter, save_individual_images=save_individual_images,
|
||||
model_name="Stable Diffusion v1.5")
|
||||
|
||||
output_images.append(img)
|
||||
|
||||
# update image on the UI so we can see the progress
|
||||
if "preview_image" in st.session_state:
|
||||
st.session_state["preview_image"].image(img)
|
||||
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].empty()
|
||||
|
||||
#if len(results) > 1:
|
||||
#final_filename = f"{iter}_{filename}"
|
||||
#img.save(final_filename)
|
||||
#logger.info(f"Saved {final_filename}")
|
||||
else:
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].error(submit_req.text)
|
||||
|
||||
logger.error(submit_req.text)
|
||||
|
||||
mem_max_used, mem_total = mem_mon.read_and_stop()
|
||||
time_diff = time.time()-start_time
|
||||
|
||||
info = f"""
|
||||
{prompt}
|
||||
Steps: {steps}, Sampler: {sampler_name}, CFG scale: {cfg_scale}, Seed: {seed}{', GFPGAN' if use_GFPGAN else ''}{', '+realesrgan_model_name if use_RealESRGAN else ''}
|
||||
{', Prompt Matrix Mode.' if prompt_matrix else ''}""".strip()
|
||||
|
||||
stats = f'''
|
||||
Took { round(time_diff, 2) }s total ({ round(time_diff/(len(all_prompts)),2) }s per image)
|
||||
Peak memory usage: { -(mem_max_used // -1_048_576) } MiB / { -(mem_total // -1_048_576) } MiB / { round(mem_max_used/mem_total*100, 3) }%'''
|
||||
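# Note: -(a // -b) is integer ceiling division, e.g. -(1_500_000 // -1_048_576) == 2,
# so the peak/total figures above are reported in MiB rounded up rather than truncated.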
|
||||
for comment in comments:
|
||||
info += "\n\n" + comment
|
||||
|
||||
#mem_mon.stop()
|
||||
#del mem_mon
|
||||
torch_gc()
|
||||
|
||||
return output_images, seed, info, stats
|
||||
|
||||
|
||||
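For reference, the `stable_horde()` function above boils down to a three-step exchange with the Stable Horde API: submit an async job, poll the check endpoint until it reports done, then retrieve and base64-decode the generations. A minimal standalone sketch of that cycle (the endpoint paths, the `apikey` header and the payload fields are taken from the code above; the helper name, defaults and polling interval are illustrative only):

```python
import base64
import time
from io import BytesIO

import requests
from PIL import Image

HORDE_URL = "https://stablehorde.net"

def horde_generate(prompt, api_key="0000000000", steps=30, width=512, height=512):
    """Submit an async Stable Horde job, poll until done, return the images as PIL objects."""
    payload = {
        "prompt": prompt,
        "params": {"sampler_name": "k_euler", "steps": steps, "width": width, "height": height, "n": 1},
        "nsfw": True,
        "censor_nsfw": False,
        "trusted_workers": True,
        "workers": [],
    }
    headers = {"apikey": api_key}

    # 1) Submit the job and remember its id.
    submit = requests.post(f"{HORDE_URL}/api/v2/generate/async", json=payload, headers=headers)
    submit.raise_for_status()
    req_id = submit.json()["id"]

    # 2) Poll the lightweight /check endpoint until the job reports done.
    while True:
        check = requests.get(f"{HORDE_URL}/api/v2/generate/check/{req_id}")
        check.raise_for_status()
        if check.json()["done"]:
            break
        time.sleep(1)

    # 3) Retrieve the finished generations and decode the base64 payloads into images.
    status = requests.get(f"{HORDE_URL}/api/v2/generate/status/{req_id}")
    status.raise_for_status()
    images = []
    for gen in status.json()["generations"]:
        img_bytes = base64.b64decode(gen["img"].encode("utf-8"))
        images.append(Image.open(BytesIO(img_bytes)))
    return images
```

The function above layers progress reporting, file naming and saving on top of this skeleton.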
#
|
||||
@logger.catch(reraise=True)
|
||||
def txt2img(prompt: str, ddim_steps: int, sampler_name: str, n_iter: int, batch_size: int, cfg_scale: float, seed: Union[int, str, None],
|
||||
height: int, width: int, separate_prompts:bool = False, normalize_prompt_weights:bool = True,
|
||||
save_individual_images: bool = True, save_grid: bool = True, group_by_prompt: bool = True,
|
||||
save_as_jpg: bool = True, use_GFPGAN: bool = True, GFPGAN_model: str = 'GFPGANv1.3', use_RealESRGAN: bool = False,
|
||||
RealESRGAN_model: str = "RealESRGAN_x4plus_anime_6B", use_LDSR: bool = True, LDSR_model: str = "model",
|
||||
fp = None, variant_amount: float = None,
|
||||
variant_seed: int = None, ddim_eta:float = 0.0, write_info_files:bool = True):
|
||||
fp = None, variant_amount: float = 0.0,
|
||||
variant_seed: int = None, ddim_eta:float = 0.0, write_info_files:bool = True,
|
||||
use_stable_horde: bool = False, stable_horde_key:str = ''):
|
||||
|
||||
outpath = st.session_state['defaults'].general.outdir_txt2img
|
||||
|
||||
seed = seed_to_int(seed)
|
||||
|
||||
if sampler_name == 'PLMS':
|
||||
sampler = PLMSSampler(server_state["model"])
|
||||
elif sampler_name == 'DDIM':
|
||||
sampler = DDIMSampler(server_state["model"])
|
||||
elif sampler_name == 'k_dpm_2_a':
|
||||
sampler = KDiffusionSampler(server_state["model"],'dpm_2_ancestral')
|
||||
elif sampler_name == 'k_dpm_2':
|
||||
sampler = KDiffusionSampler(server_state["model"],'dpm_2')
|
||||
elif sampler_name == 'k_euler_a':
|
||||
sampler = KDiffusionSampler(server_state["model"],'euler_ancestral')
|
||||
elif sampler_name == 'k_euler':
|
||||
sampler = KDiffusionSampler(server_state["model"],'euler')
|
||||
elif sampler_name == 'k_heun':
|
||||
sampler = KDiffusionSampler(server_state["model"],'heun')
|
||||
elif sampler_name == 'k_lms':
|
||||
sampler = KDiffusionSampler(server_state["model"],'lms')
|
||||
if not use_stable_horde:
|
||||
|
||||
if sampler_name == 'PLMS':
|
||||
sampler = PLMSSampler(server_state["model"])
|
||||
elif sampler_name == 'DDIM':
|
||||
sampler = DDIMSampler(server_state["model"])
|
||||
elif sampler_name == 'k_dpm_2_a':
|
||||
sampler = KDiffusionSampler(server_state["model"],'dpm_2_ancestral')
|
||||
elif sampler_name == 'k_dpm_2':
|
||||
sampler = KDiffusionSampler(server_state["model"],'dpm_2')
|
||||
elif sampler_name == 'k_euler_a':
|
||||
sampler = KDiffusionSampler(server_state["model"],'euler_ancestral')
|
||||
elif sampler_name == 'k_euler':
|
||||
sampler = KDiffusionSampler(server_state["model"],'euler')
|
||||
elif sampler_name == 'k_heun':
|
||||
sampler = KDiffusionSampler(server_state["model"],'heun')
|
||||
elif sampler_name == 'k_lms':
|
||||
sampler = KDiffusionSampler(server_state["model"],'lms')
|
||||
else:
|
||||
raise Exception("Unknown sampler: " + sampler_name)
|
||||
|
||||
def init():
|
||||
pass
|
||||
|
||||
def sample(init_data, x, conditioning, unconditional_conditioning, sampler_name):
|
||||
samples_ddim, _ = sampler.sample(S=ddim_steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=cfg_scale,
|
||||
unconditional_conditioning=unconditional_conditioning, eta=ddim_eta, x_T=x,
|
||||
img_callback=generation_callback if not server_state["bridge"] else None,
|
||||
log_every_t=int(st.session_state.update_preview_frequency if not server_state["bridge"] else 100))
|
||||
|
||||
return samples_ddim
|
||||
|
||||
|
||||
if use_stable_horde:
|
||||
output_images, seed, info, stats = stable_horde(
|
||||
prompt=prompt,
|
||||
seed=seed,
|
||||
outpath=outpath,
|
||||
sampler_name=sampler_name,
|
||||
save_grid=save_grid,
|
||||
batch_size=batch_size,
|
||||
n_iter=n_iter,
|
||||
steps=ddim_steps,
|
||||
cfg_scale=cfg_scale,
|
||||
width=width,
|
||||
height=height,
|
||||
prompt_matrix=separate_prompts,
|
||||
use_GFPGAN=use_GFPGAN,
|
||||
GFPGAN_model=GFPGAN_model,
|
||||
use_RealESRGAN=use_RealESRGAN,
|
||||
realesrgan_model_name=RealESRGAN_model,
|
||||
use_LDSR=use_LDSR,
|
||||
LDSR_model_name=LDSR_model,
|
||||
ddim_eta=ddim_eta,
|
||||
normalize_prompt_weights=normalize_prompt_weights,
|
||||
save_individual_images=save_individual_images,
|
||||
sort_samples=group_by_prompt,
|
||||
write_info_files=write_info_files,
|
||||
jpg_sample=save_as_jpg,
|
||||
variant_amount=variant_amount,
|
||||
variant_seed=variant_seed,
|
||||
api_key=stable_horde_key
|
||||
)
|
||||
else:
|
||||
raise Exception("Unknown sampler: " + sampler_name)
|
||||
|
||||
def init():
|
||||
pass
|
||||
#try:
|
||||
output_images, seed, info, stats = process_images(
|
||||
outpath=outpath,
|
||||
func_init=init,
|
||||
func_sample=sample,
|
||||
prompt=prompt,
|
||||
seed=seed,
|
||||
sampler_name=sampler_name,
|
||||
save_grid=save_grid,
|
||||
batch_size=batch_size,
|
||||
n_iter=n_iter,
|
||||
steps=ddim_steps,
|
||||
cfg_scale=cfg_scale,
|
||||
width=width,
|
||||
height=height,
|
||||
prompt_matrix=separate_prompts,
|
||||
use_GFPGAN=use_GFPGAN,
|
||||
GFPGAN_model=GFPGAN_model,
|
||||
use_RealESRGAN=use_RealESRGAN,
|
||||
realesrgan_model_name=RealESRGAN_model,
|
||||
use_LDSR=use_LDSR,
|
||||
LDSR_model_name=LDSR_model,
|
||||
ddim_eta=ddim_eta,
|
||||
normalize_prompt_weights=normalize_prompt_weights,
|
||||
save_individual_images=save_individual_images,
|
||||
sort_samples=group_by_prompt,
|
||||
write_info_files=write_info_files,
|
||||
jpg_sample=save_as_jpg,
|
||||
variant_amount=variant_amount,
|
||||
variant_seed=variant_seed,
|
||||
)
|
||||
|
||||
def sample(init_data, x, conditioning, unconditional_conditioning, sampler_name):
|
||||
samples_ddim, _ = sampler.sample(S=ddim_steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=cfg_scale,
|
||||
unconditional_conditioning=unconditional_conditioning, eta=ddim_eta, x_T=x, img_callback=generation_callback,
|
||||
log_every_t=int(st.session_state.update_preview_frequency))
|
||||
|
||||
return samples_ddim
|
||||
|
||||
#try:
|
||||
output_images, seed, info, stats = process_images(
|
||||
outpath=outpath,
|
||||
func_init=init,
|
||||
func_sample=sample,
|
||||
prompt=prompt,
|
||||
seed=seed,
|
||||
sampler_name=sampler_name,
|
||||
save_grid=save_grid,
|
||||
batch_size=batch_size,
|
||||
n_iter=n_iter,
|
||||
steps=ddim_steps,
|
||||
cfg_scale=cfg_scale,
|
||||
width=width,
|
||||
height=height,
|
||||
prompt_matrix=separate_prompts,
|
||||
use_GFPGAN=st.session_state["use_GFPGAN"],
|
||||
GFPGAN_model=st.session_state["GFPGAN_model"],
|
||||
use_RealESRGAN=st.session_state["use_RealESRGAN"],
|
||||
realesrgan_model_name=RealESRGAN_model,
|
||||
use_LDSR=st.session_state["use_LDSR"],
|
||||
LDSR_model_name=LDSR_model,
|
||||
ddim_eta=ddim_eta,
|
||||
normalize_prompt_weights=normalize_prompt_weights,
|
||||
save_individual_images=save_individual_images,
|
||||
sort_samples=group_by_prompt,
|
||||
write_info_files=write_info_files,
|
||||
jpg_sample=save_as_jpg,
|
||||
variant_amount=variant_amount,
|
||||
variant_seed=variant_seed,
|
||||
)
|
||||
|
||||
del sampler
|
||||
|
||||
return output_images, seed, info, stats
|
||||
|
||||
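As an aside, the name-to-sampler dispatch that appears twice above (once unconditionally and once under `if not use_stable_horde:`) can be collapsed into a lookup table. A sketch of a possible refactor, using the same `PLMSSampler`, `DDIMSampler` and `KDiffusionSampler` constructors already used in this module:

```python
def get_sampler(model, sampler_name):
    """Return a sampler instance for the given UI sampler name."""
    k_samplers = {
        "k_dpm_2_a": "dpm_2_ancestral",
        "k_dpm_2": "dpm_2",
        "k_euler_a": "euler_ancestral",
        "k_euler": "euler",
        "k_heun": "heun",
        "k_lms": "lms",
    }
    if sampler_name == "PLMS":
        return PLMSSampler(model)
    if sampler_name == "DDIM":
        return DDIMSampler(model)
    if sampler_name in k_samplers:
        return KDiffusionSampler(model, k_samplers[sampler_name])
    raise Exception("Unknown sampler: " + sampler_name)
```

Calling `sampler = get_sampler(server_state["model"], sampler_name)` would then replace either if/elif chain.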
@ -175,6 +396,7 @@ def txt2img(prompt: str, ddim_steps: int, sampler_name: str, n_iter: int, batch_
|
||||
#return [], seed, 'err', stats
|
||||
|
||||
#
|
||||
@logger.catch(reraise=True)
|
||||
def layout():
|
||||
with st.form("txt2img-inputs"):
|
||||
st.session_state["generation_mode"] = "txt2img"
|
||||
@ -183,7 +405,9 @@ def layout():
|
||||
|
||||
with input_col1:
|
||||
#prompt = st.text_area("Input Text","")
|
||||
prompt = st.text_area("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.")
|
||||
placeholder = "A corgi wearing a top hat as an oil painting."
|
||||
prompt = st.text_area("Input Text","", placeholder=placeholder, height=54)
|
||||
key_phrase_suggestions.suggestion_area(placeholder)
|
||||
|
||||
# creating the page layout using columns
|
||||
col1, col2, col3 = st.columns([1,2,1], gap="large")
|
||||
@ -193,10 +417,10 @@ def layout():
|
||||
value=st.session_state['defaults'].txt2img.width.value, step=st.session_state['defaults'].txt2img.width.step)
|
||||
height = st.slider("Height:", min_value=st.session_state['defaults'].txt2img.height.min_value, max_value=st.session_state['defaults'].txt2img.height.max_value,
|
||||
value=st.session_state['defaults'].txt2img.height.value, step=st.session_state['defaults'].txt2img.height.step)
|
||||
cfg_scale = st.slider("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].txt2img.cfg_scale.min_value,
|
||||
max_value=st.session_state['defaults'].txt2img.cfg_scale.max_value,
|
||||
cfg_scale = st.number_input("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].txt2img.cfg_scale.min_value,
|
||||
value=st.session_state['defaults'].txt2img.cfg_scale.value, step=st.session_state['defaults'].txt2img.cfg_scale.step,
|
||||
help="How strongly the image should follow the prompt.")
|
||||
|
||||
seed = st.text_input("Seed:", value=st.session_state['defaults'].txt2img.seed, help=" The seed to use, if left blank a random seed will be generated.")
|
||||
|
||||
with st.expander("Batch Options"):
|
||||
@ -264,7 +488,7 @@ def layout():
|
||||
help="Select the model you want to use. This option is only available if you have custom models \
|
||||
on your 'models/custom' folder. The model name that will be shown here is the same as the name\
|
||||
the file for the model has on said folder, it is recommended to give the .ckpt file a name that \
|
||||
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.4")
|
||||
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.5")
|
||||
|
||||
st.session_state.sampling_steps = st.number_input("Sampling Steps", value=st.session_state.defaults.txt2img.sampling_steps.value,
|
||||
min_value=st.session_state.defaults.txt2img.sampling_steps.min_value,
|
||||
@ -276,6 +500,11 @@ def layout():
|
||||
index=sampler_name_list.index(st.session_state['defaults'].txt2img.default_sampler), help="Sampling method to use. Default: k_euler")
|
||||
|
||||
with st.expander("Advanced"):
|
||||
with st.expander("Stable Horde"):
|
||||
use_stable_horde = st.checkbox("Use Stable Horde", value=False, help="Use the Stable Horde to generate images. More info can be found at https://stablehorde.net/")
|
||||
stable_horde_key = st.text_input("Stable Horde Api Key", value='', type="password",
|
||||
help="Optional Api Key used for the Stable Horde Bridge, if no api key is added the horde will be used anonymously.")
|
||||
|
||||
with st.expander("Output Settings"):
|
||||
separate_prompts = st.checkbox("Create Prompt Matrix.", value=st.session_state['defaults'].txt2img.separate_prompts,
|
||||
help="Separate multiple prompts using the `|` character, and get all combinations of them.")
|
||||
@ -403,12 +632,12 @@ def layout():
|
||||
if generate_button:
|
||||
|
||||
with col2:
|
||||
with hc.HyLoader('Loading Models...', hc.Loaders.standard_loaders,index=[0]):
|
||||
load_models(use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
|
||||
use_GFPGAN=st.session_state["use_GFPGAN"], GFPGAN_model=st.session_state["GFPGAN_model"] ,
|
||||
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
|
||||
CustomModel_available=server_state["CustomModel_available"], custom_model=st.session_state["custom_model"])
|
||||
|
||||
if not use_stable_horde:
|
||||
with hc.HyLoader('Loading Models...', hc.Loaders.standard_loaders,index=[0]):
|
||||
load_models(use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
|
||||
use_GFPGAN=st.session_state["use_GFPGAN"], GFPGAN_model=st.session_state["GFPGAN_model"] ,
|
||||
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
|
||||
CustomModel_available=server_state["CustomModel_available"], custom_model=st.session_state["custom_model"])
|
||||
|
||||
#print(st.session_state['use_RealESRGAN'])
|
||||
#print(st.session_state['use_LDSR'])
|
||||
@ -420,7 +649,8 @@ def layout():
|
||||
save_grid, group_by_prompt, save_as_jpg, st.session_state["use_GFPGAN"], st.session_state['GFPGAN_model'],
|
||||
use_RealESRGAN=st.session_state["use_RealESRGAN"], RealESRGAN_model=st.session_state["RealESRGAN_model"],
|
||||
use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
|
||||
variant_amount=variant_amount, variant_seed=variant_seed, write_info_files=write_info_files)
|
||||
variant_amount=variant_amount, variant_seed=variant_seed, write_info_files=write_info_files,
|
||||
use_stable_horde=use_stable_horde, stable_horde_key=stable_horde_key)
|
||||
|
||||
message.success('Render Complete: ' + info + '; Stats: ' + stats, icon="✅")
|
||||
|
||||
@ -459,7 +689,7 @@ def layout():
|
||||
#st.session_state['historyTab'] = [history_tab,col1,col2,col3,PlaceHolder,col1_cont,col2_cont,col3_cont]
|
||||
|
||||
with gallery_tab:
|
||||
print(seeds)
|
||||
logger.info(seeds)
|
||||
sdGallery(output_images)
|
||||
|
||||
|
||||
|
@ -14,6 +14,13 @@
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
# base webui import and utils.
|
||||
|
||||
"""
|
||||
Implementation of Text to Video based on the
|
||||
https://github.com/nateraw/stable-diffusion-videos
|
||||
repo and the original gist script from
|
||||
https://gist.github.com/karpathy/00103b0037c5aaea32fe1da1af553355
|
||||
"""
|
||||
from sd_utils import *
|
||||
|
||||
# streamlit imports
|
||||
@ -25,7 +32,7 @@ from streamlit_server_state import server_state, server_state_lock
|
||||
|
||||
#other imports
|
||||
|
||||
import os
|
||||
import os, sys
|
||||
from PIL import Image
|
||||
import torch
|
||||
import numpy as np
|
||||
@ -40,11 +47,16 @@ from diffusers import StableDiffusionPipeline
|
||||
from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, \
|
||||
PNDMScheduler
|
||||
|
||||
# streamlit components
|
||||
from custom_components import key_phrase_suggestions
|
||||
|
||||
# Temp imports
|
||||
|
||||
# end of imports
|
||||
#---------------------------------------------------------------------------------------------------------------
|
||||
|
||||
key_phrase_suggestions.init()
|
||||
|
||||
try:
|
||||
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
|
||||
from transformers import logging
|
||||
@ -190,13 +202,16 @@ def diffuse(
|
||||
frames_percent = int(100 * float(st.session_state.current_frame if st.session_state.current_frame < st.session_state.max_frames else st.session_state.max_frames)/float(
|
||||
st.session_state.max_frames))
|
||||
|
||||
st.session_state["progress_bar_text"].text(
|
||||
f"Running step: {i+1 if i+1 < st.session_state.sampling_steps else st.session_state.sampling_steps}/{st.session_state.sampling_steps} "
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text(
|
||||
f"Running step: {i+1 if i+1 < st.session_state.sampling_steps else st.session_state.sampling_steps}/{st.session_state.sampling_steps} "
|
||||
f"{percent if percent < 100 else 100}% {inference_progress}{duration:.2f}{speed} | "
|
||||
f"Frame: {st.session_state.current_frame + 1 if st.session_state.current_frame < st.session_state.max_frames else st.session_state.max_frames}/{st.session_state.max_frames} "
|
||||
f"{frames_percent if frames_percent < 100 else 100}% {st.session_state.frame_duration:.2f}{st.session_state.frame_speed}"
|
||||
)
|
||||
st.session_state["progress_bar"].progress(percent if percent < 100 else 100)
|
||||
)
|
||||
|
||||
if "progress_bar" in st.session_state:
|
||||
st.session_state["progress_bar"].progress(percent if percent < 100 else 100)
|
||||
|
||||
except KeyError:
|
||||
raise StopException
|
||||
@ -224,14 +239,21 @@ def load_diffusers_model(weights_path,torch_device):
|
||||
try:
|
||||
with server_state_lock["pipe"]:
|
||||
if "pipe" not in server_state:
|
||||
if ("weights_path" in st.session_state) and st.session_state["weights_path"] != weights_path:
|
||||
if "weights_path" in st.session_state and st.session_state["weights_path"] != weights_path:
|
||||
del st.session_state["weights_path"]
|
||||
|
||||
st.session_state["weights_path"] = weights_path
|
||||
# if folder "models/diffusers/stable-diffusion-v1-4" exists, load the model from there
|
||||
server_state['float16'] = st.session_state['defaults'].general.use_float16
|
||||
server_state['no_half'] = st.session_state['defaults'].general.no_half
|
||||
server_state['optimized'] = st.session_state['defaults'].general.optimized
|
||||
|
||||
#if folder "models/diffusers/stable-diffusion-v1-4" exists, load the model from there
|
||||
if weights_path == "CompVis/stable-diffusion-v1-4":
|
||||
model_path = os.path.join("models", "diffusers", "stable-diffusion-v1-4")
|
||||
|
||||
if weights_path == "runwayml/stable-diffusion-v1-5":
|
||||
model_path = os.path.join("models", "diffusers", "stable-diffusion-v1-5")
|
||||
|
||||
if not os.path.exists(model_path + "/model_index.json"):
|
||||
server_state["pipe"] = StableDiffusionPipeline.from_pretrained(
|
||||
weights_path,
|
||||
@ -259,14 +281,40 @@ def load_diffusers_model(weights_path,torch_device):
|
||||
if st.session_state.defaults.general.enable_minimal_memory_usage:
|
||||
server_state["pipe"].enable_minimal_memory_usage()
|
||||
|
||||
print("Tx2Vid Model Loaded")
|
||||
logger.info("Tx2Vid Model Loaded")
|
||||
else:
|
||||
print("Tx2Vid Model already Loaded")
|
||||
except (EnvironmentError, OSError):
|
||||
st.session_state["progress_bar_text"].error(
|
||||
"You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token."
|
||||
)
|
||||
raise OSError("You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token.")
|
||||
# if the float16 or no_half options have changed since the last time the model was loaded then we need to reload the model.
|
||||
if ("float16" in server_state and server_state['float16'] != st.session_state['defaults'].general.use_float16) \
|
||||
or ("no_half" in server_state and server_state['no_half'] != st.session_state['defaults'].general.no_half) \
|
||||
or ("optimized" in server_state and server_state['optimized'] != st.session_state['defaults'].general.optimized):
|
||||
|
||||
del server_state['float16']
|
||||
del server_state['no_half']
|
||||
with server_state_lock["pipe"]:
|
||||
del server_state["pipe"]
|
||||
torch_gc()
|
||||
|
||||
del server_state['optimized']
|
||||
|
||||
server_state['float16'] = st.session_state['defaults'].general.use_float16
|
||||
server_state['no_half'] = st.session_state['defaults'].general.no_half
|
||||
server_state['optimized'] = st.session_state['defaults'].general.optimized
|
||||
|
||||
load_diffusers_model(weights_path, torch_device)
|
||||
else:
|
||||
logger.info("Tx2Vid Model already Loaded")
|
||||
|
||||
except (EnvironmentError, OSError) as e:
|
||||
if "huggingface_token" not in st.session_state or st.session_state["defaults"].general.huggingface_token == "None":
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].error(
|
||||
"You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token."
|
||||
)
|
||||
raise OSError("You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token.")
|
||||
else:
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].error(e)
|
||||
|
||||
#
|
||||
def save_video_to_disk(frames, seeds, sanitized_prompt, fps=6,save_video=True, outdir='outputs'):
|
||||
if save_video:
|
||||
@ -306,7 +354,7 @@ def txt2vid(
|
||||
eta:float = 0.0,
|
||||
width:int = 256,
|
||||
height:int = 256,
|
||||
weights_path = "CompVis/stable-diffusion-v1-4",
|
||||
weights_path = "runwayml/stable-diffusion-v1-5",
|
||||
scheduler="klms", # choices: default, ddim, klms
|
||||
disable_tqdm = False,
|
||||
#-----------------------------------------------
|
||||
@ -331,7 +379,7 @@ def txt2vid(
|
||||
eta:float = 0.0,
|
||||
width:int = 256,
|
||||
height:int = 256,
|
||||
weights_path = "CompVis/stable-diffusion-v1-4",
|
||||
weights_path = "runwayml/stable-diffusion-v1-5",
|
||||
scheduler="klms", # choices: default, ddim, klms
|
||||
disable_tqdm = False,
|
||||
beta_start = 0.0001,
|
||||
@ -413,17 +461,12 @@ def txt2vid(
|
||||
|
||||
SCHEDULERS = dict(default=default_scheduler, ddim=ddim_scheduler, klms=klms_scheduler)
|
||||
|
||||
if "pipe" not in server_state:
|
||||
with st.session_state["progress_bar_text"].container():
|
||||
with hc.HyLoader('Loading Models...', hc.Loaders.standard_loaders,index=[0]):
|
||||
if "model" in st.session_state:
|
||||
del st.session_state["model"]
|
||||
load_diffusers_model(weights_path, torch_device)
|
||||
else:
|
||||
print("Model already loaded")
|
||||
with st.session_state["progress_bar_text"].container():
|
||||
with hc.HyLoader('Loading Models...', hc.Loaders.standard_loaders,index=[0]):
|
||||
load_diffusers_model(weights_path, torch_device)
|
||||
|
||||
if "pipe" not in server_state:
|
||||
print('wtf')
|
||||
logger.error('wtf')
|
||||
|
||||
server_state["pipe"].scheduler = SCHEDULERS[scheduler]
|
||||
|
||||
@ -481,28 +524,32 @@ def txt2vid(
|
||||
frames = []
|
||||
frame_index = 0
|
||||
|
||||
second_count = 1
|
||||
|
||||
st.session_state["total_frames_avg_duration"] = []
|
||||
st.session_state["total_frames_avg_speed"] = []
|
||||
|
||||
try:
|
||||
while frame_index < max_frames:
|
||||
while second_count < max_frames:
|
||||
st.session_state["frame_duration"] = 0
|
||||
st.session_state["frame_speed"] = 0
|
||||
st.session_state["current_frame"] = frame_index
|
||||
|
||||
#print(f"Second: {second_count+1}/{max_frames}")
|
||||
|
||||
# sample the destination
|
||||
init2 = torch.randn((1, server_state["pipe"].unet.in_channels, height // 8, width // 8), device=torch_device)
|
||||
|
||||
for i, t in enumerate(np.linspace(0, 1, num_steps)):
|
||||
start = timeit.default_timer()
|
||||
print(f"COUNT: {frame_index+1}/{max_frames}")
|
||||
logger.info(f"COUNT: {frame_index+1}/{max_frames}")
|
||||
|
||||
#if use_lerp_for_text:
|
||||
#init = torch.lerp(init1, init2, float(t))
|
||||
#else:
|
||||
#init = slerp(gpu, float(t), init1, init2)
|
||||
if use_lerp_for_text:
|
||||
init = torch.lerp(init1, init2, float(t))
|
||||
else:
|
||||
init = slerp(gpu, float(t), init1, init2)
|
||||
|
||||
init = slerp(gpu, float(t), init1, init2)
|
||||
#init = slerp(gpu, float(t), init1, init2)
|
||||
|
||||
with autocast("cuda"):
|
||||
image = diffuse(server_state["pipe"], cond_embeddings, init, num_inference_steps, cfg_scale, eta)
|
||||
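The frame loop above interpolates between the two latent noise tensors `init1` and `init2`, either linearly (`torch.lerp`) or spherically (`slerp`). The repo's `slerp` helper is defined elsewhere and not shown in this diff; a typical spherical linear interpolation with this call signature looks roughly like the sketch below (an illustration, not necessarily the exact code used here):

```python
import torch

def slerp(device, t, v0, v1, dot_threshold=0.9995):
    """Spherically interpolate between tensors v0 and v1 by fraction t in [0, 1]."""
    v0_flat = v0.reshape(-1).to(torch.float64)
    v1_flat = v1.reshape(-1).to(torch.float64)
    dot = torch.dot(v0_flat / v0_flat.norm(), v1_flat / v1_flat.norm())
    if torch.abs(dot) > dot_threshold:
        # Nearly colinear vectors: plain lerp is numerically safer than dividing by sin(theta).
        return torch.lerp(v0, v1, float(t)).to(device)
    theta = torch.acos(dot)
    sin_theta = torch.sin(theta)
    s0 = torch.sin((1.0 - t) * theta) / sin_theta
    s1 = torch.sin(t * theta) / sin_theta
    out = s0 * v0_flat + s1 * v1_flat
    return out.reshape(v0.shape).to(v0.dtype).to(device)
```

Compared to lerp, slerp keeps the interpolated latents at a roughly constant magnitude, which tends to give smoother transitions between frames.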
@ -524,7 +571,8 @@ def txt2vid(
|
||||
#if st.session_state["use_GFPGAN"] and server_state["GFPGAN"] is not None and not st.session_state["use_RealESRGAN"]:
|
||||
if st.session_state["use_GFPGAN"] and server_state["GFPGAN"] is not None:
|
||||
#print("Running GFPGAN on image ...")
|
||||
st.session_state["progress_bar_text"].text("Running GFPGAN on image ...")
|
||||
if "progress_bar_text" in st.session_state:
|
||||
st.session_state["progress_bar_text"].text("Running GFPGAN on image ...")
|
||||
#skip_save = True # #287 >_>
|
||||
torch_gc()
|
||||
cropped_faces, restored_faces, restored_img = server_state["GFPGAN"].enhance(np.array(image)[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
|
||||
@ -539,7 +587,7 @@ def txt2vid(
|
||||
try:
|
||||
st.session_state["preview_image"].image(gfpgan_image)
|
||||
except KeyError:
|
||||
print ("Cant get session_state, skipping image preview.")
|
||||
logger.error ("Cant get session_state, skipping image preview.")
|
||||
#except (AttributeError, KeyError):
|
||||
#print("Cant perform GFPGAN, skipping.")
|
||||
|
||||
@ -566,7 +614,7 @@ def txt2vid(
|
||||
|
||||
except StopException:
|
||||
if save_video_on_stop:
|
||||
print ("Streamlit Stop Exception Received. Saving video")
|
||||
logger.info("Streamlit Stop Exception Received. Saving video")
|
||||
video_path = save_video_to_disk(frames, seeds, sanitized_prompt, save_video=save_video, outdir=outdir)
|
||||
else:
|
||||
video_path = None
|
||||
@ -596,7 +644,9 @@ def layout():
|
||||
input_col1, generate_col1 = st.columns([10,1])
|
||||
with input_col1:
|
||||
#prompt = st.text_area("Input Text","")
|
||||
prompt = st.text_area("Input Text","", placeholder="A corgi wearing a top hat as an oil painting.")
|
||||
placeholder = "A corgi wearing a top hat as an oil painting."
|
||||
prompt = st.text_area("Input Text","", placeholder=placeholder, height=54)
|
||||
key_phrase_suggestions.suggestion_area(placeholder)
|
||||
|
||||
# Every form must have a submit button, the extra blank spaces is a temp way to align it with the input field. Needs to be done in CSS or some other way.
|
||||
generate_col1.write("")
|
||||
@ -611,9 +661,10 @@ def layout():
|
||||
value=st.session_state['defaults'].txt2vid.width.value, step=st.session_state['defaults'].txt2vid.width.step)
|
||||
height = st.slider("Height:", min_value=st.session_state['defaults'].txt2vid.height.min_value, max_value=st.session_state['defaults'].txt2vid.height.max_value,
|
||||
value=st.session_state['defaults'].txt2vid.height.value, step=st.session_state['defaults'].txt2vid.height.step)
|
||||
cfg_scale = st.slider("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].txt2vid.cfg_scale.min_value,
|
||||
max_value=st.session_state['defaults'].txt2vid.cfg_scale.max_value, value=st.session_state['defaults'].txt2vid.cfg_scale.value,
|
||||
step=st.session_state['defaults'].txt2vid.cfg_scale.step, help="How strongly the image should follow the prompt.")
|
||||
cfg_scale = st.number_input("CFG (Classifier Free Guidance Scale):", min_value=st.session_state['defaults'].txt2vid.cfg_scale.min_value,
|
||||
value=st.session_state['defaults'].txt2vid.cfg_scale.value,
|
||||
step=st.session_state['defaults'].txt2vid.cfg_scale.step,
|
||||
help="How strongly the image should follow the prompt.")
|
||||
|
||||
#uploaded_images = st.file_uploader("Upload Image", accept_multiple_files=False, type=["png", "jpg", "jpeg", "webp"],
|
||||
#help="Upload an image which will be used for the image to image generation.")
|
||||
@ -686,13 +737,13 @@ def layout():
|
||||
help="Select the model you want to use. This option is only available if you have custom models \
|
||||
on your 'models/custom' folder. The model name that will be shown here is the same as the name\
|
||||
the file for the model has on said folder, it is recommended to give the .ckpt file a name that \
|
||||
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.4")
|
||||
will make it easier for you to distinguish it from other models. Default: Stable Diffusion v1.5")
|
||||
else:
|
||||
custom_model = "CompVis/stable-diffusion-v1-4"
|
||||
custom_model = "runwayml/stable-diffusion-v1-5"
|
||||
|
||||
#st.session_state["weights_path"] = custom_model
|
||||
#else:
|
||||
#custom_model = "CompVis/stable-diffusion-v1-4"
|
||||
#custom_model = "runwayml/stable-diffusion-v1-5"
|
||||
#st.session_state["weights_path"] = f"CompVis/{slugify(custom_model.lower())}"
|
||||
|
||||
st.session_state.sampling_steps = st.number_input("Sampling Steps", value=st.session_state['defaults'].txt2vid.sampling_steps.value,
|
||||
@ -745,8 +796,14 @@ def layout():
|
||||
|
||||
st.session_state["write_info_files"] = st.checkbox("Write Info file", value=st.session_state['defaults'].txt2vid.write_info_files,
|
||||
help="Save a file next to the image with informartion about the generation.")
|
||||
st.session_state["do_loop"] = st.checkbox("Do Loop", value=st.session_state['defaults'].txt2vid.do_loop,
|
||||
help="Do loop")
|
||||
|
||||
#st.session_state["do_loop"] = st.checkbox("Do Loop", value=st.session_state['defaults'].txt2vid.do_loop, help="Do loop")
|
||||
st.session_state["use_lerp_for_text"] = st.checkbox("Use Lerp Instead of Slerp", value=st.session_state['defaults'].txt2vid.use_lerp_for_text,
|
||||
help="Uses torch.lerp() instead of slerp. When interpolating between related prompts. \
|
||||
e.g. 'a lion in a grassy meadow' -> 'a bear in a grassy meadow' tends to keep the meadow \
|
||||
the whole way through when lerped, but slerping will often find a path where the meadow \
|
||||
disappears in the middle")
|
||||
|
||||
st.session_state["save_as_jpg"] = st.checkbox("Save samples as jpg", value=st.session_state['defaults'].txt2vid.save_as_jpg, help="Saves the images as jpg instead of png.")
|
||||
|
||||
#
|
||||
@ -861,7 +918,7 @@ def layout():
|
||||
|
||||
if st.session_state["use_GFPGAN"]:
|
||||
if "GFPGAN" in server_state:
|
||||
print("GFPGAN already loaded")
|
||||
logger.info("GFPGAN already loaded")
|
||||
else:
|
||||
with col2:
|
||||
with hc.HyLoader('Loading Models...', hc.Loaders.standard_loaders,index=[0]):
|
||||
@ -869,11 +926,11 @@ def layout():
|
||||
if os.path.exists(st.session_state["defaults"].general.GFPGAN_dir):
|
||||
try:
|
||||
load_GFPGAN()
|
||||
print("Loaded GFPGAN")
|
||||
logger.info("Loaded GFPGAN")
|
||||
except Exception:
|
||||
import traceback
|
||||
print("Error loading GFPGAN:", file=sys.stderr)
|
||||
print(traceback.format_exc(), file=sys.stderr)
|
||||
logger.error("Error loading GFPGAN:", file=sys.stderr)
|
||||
logger.error(traceback.format_exc(), file=sys.stderr)
|
||||
else:
|
||||
if "GFPGAN" in server_state:
|
||||
del server_state["GFPGAN"]
|
||||
@ -885,7 +942,8 @@ def layout():
|
||||
num_inference_steps=st.session_state.num_inference_steps,
|
||||
cfg_scale=cfg_scale, save_video_on_stop=save_video_on_stop,
|
||||
outdir=st.session_state["defaults"].general.outdir,
|
||||
do_loop=st.session_state["do_loop"],
|
||||
#do_loop=st.session_state["do_loop"],
|
||||
use_lerp_for_text=st.session_state["use_lerp_for_text"],
|
||||
seeds=seed, quality=100, eta=0.0, width=width,
|
||||
height=height, weights_path=custom_model, scheduler=scheduler_name,
|
||||
disable_tqdm=False, beta_start=st.session_state['defaults'].txt2vid.beta_start.value,
|
||||
|
@ -2783,22 +2783,33 @@ if __name__ == '__main__':
|
||||
if opt.bridge:
|
||||
try:
|
||||
import bridgeData as cd
|
||||
except:
|
||||
except ModuleNotFoundError as e:
|
||||
logger.warning("No bridgeData found. Falling back to default where no CLI args are set.")
|
||||
logger.warning(str(e))
|
||||
except SyntaxError as e:
|
||||
logger.warning("bridgeData found, but is malformed. Falling back to default where no CLI args are set.")
|
||||
logger.warning(str(e))
|
||||
except Exception as e:
|
||||
logger.warning("No bridgeData found, use default where no CLI args are set")
|
||||
class temp(object):
|
||||
def __init__(self):
|
||||
random.seed()
|
||||
self.horde_url = "https://stablehorde.net"
|
||||
# Give a cool name to your instance
|
||||
self.horde_name = f"Automated Instance #{random.randint(-100000000, 100000000)}"
|
||||
# The api_key identifies a unique user in the horde
|
||||
self.horde_api_key = "0000000000"
|
||||
# Put other users whose prompts you want to prioritize.
|
||||
# The owner's username is always included so you don't need to add it here, unless you want it to have lower priority than another user
|
||||
self.horde_priority_usernames = []
|
||||
self.horde_max_power = 8
|
||||
self.nsfw = True
|
||||
cd = temp()
|
||||
logger.warning(str(e))
|
||||
finally:
|
||||
try: # check if cd exists (i.e. bridgeData loaded properly)
|
||||
cd
|
||||
except: # if not, create defaults
|
||||
class temp(object):
|
||||
def __init__(self):
|
||||
random.seed()
|
||||
self.horde_url = "https://stablehorde.net"
|
||||
# Give a cool name to your instance
|
||||
self.horde_name = f"Automated Instance #{random.randint(-100000000, 100000000)}"
|
||||
# The api_key identifies a unique user in the horde
|
||||
self.horde_api_key = "0000000000"
|
||||
# Put other users whose prompts you want to prioritize.
|
||||
# The owner's username is always included so you don't need to add it here, unless you want it to have lower priority than another user
|
||||
self.horde_priority_usernames = []
|
||||
self.horde_max_power = 8
|
||||
self.nsfw = True
|
||||
cd = temp()
|
||||
horde_api_key = opt.horde_api_key if opt.horde_api_key else cd.horde_api_key
|
||||
horde_name = opt.horde_name if opt.horde_name else cd.horde_name
|
||||
horde_url = opt.horde_url if opt.horde_url else cd.horde_url
|
||||
|
@ -12,21 +12,22 @@
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
# base webui import and utils.
|
||||
#import streamlit as st
|
||||
|
||||
# We import hydralit like this to replace the previous stuff
|
||||
# we had with native streamlit as it lets us replace things 1:1
|
||||
#import hydralit as st
|
||||
#import hydralit as st
|
||||
import collections.abc
|
||||
from sd_utils import *
|
||||
|
||||
# streamlit imports
|
||||
import streamlit_nested_layout
|
||||
|
||||
#streamlit components section
|
||||
from st_on_hover_tabs import on_hover_tabs
|
||||
#from st_on_hover_tabs import on_hover_tabs
|
||||
from streamlit_server_state import server_state, server_state_lock
|
||||
|
||||
#other imports
|
||||
@ -35,38 +36,55 @@ import warnings
|
||||
import os, toml
|
||||
import k_diffusion as K
|
||||
from omegaconf import OmegaConf
|
||||
import argparse
|
||||
|
||||
if not "defaults" in st.session_state:
|
||||
st.session_state["defaults"] = {}
|
||||
|
||||
st.session_state["defaults"] = OmegaConf.load("configs/webui/webui_streamlit.yaml")
|
||||
|
||||
if (os.path.exists("configs/webui/userconfig_streamlit.yaml")):
|
||||
user_defaults = OmegaConf.load("configs/webui/userconfig_streamlit.yaml")
|
||||
st.session_state["defaults"] = OmegaConf.merge(st.session_state["defaults"], user_defaults)
|
||||
else:
|
||||
OmegaConf.save(config=st.session_state.defaults, f="configs/webui/userconfig_streamlit.yaml")
|
||||
loaded = OmegaConf.load("configs/webui/userconfig_streamlit.yaml")
|
||||
assert st.session_state.defaults == loaded
|
||||
|
||||
if (os.path.exists(".streamlit/config.toml")):
|
||||
st.session_state["streamlit_config"] = toml.load(".streamlit/config.toml")
|
||||
# import custom components
|
||||
from custom_components import draggable_number_input
|
||||
|
||||
# end of imports
|
||||
#---------------------------------------------------------------------------------------------------------------
|
||||
|
||||
load_configs()
|
||||
|
||||
help = """
|
||||
A double dash (`--`) is used to separate streamlit arguments from app arguments.
|
||||
As a result using "streamlit run webui_streamlit.py --headless"
|
||||
will show the help for streamlit itself and not pass any argument to our app,
|
||||
we need to use "streamlit run webui_streamlit.py -- --headless"
|
||||
in order to pass a command argument to this app."""
|
||||
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
|
||||
|
||||
parser.add_argument("--headless", action='store_true', help="Don't launch web server, util if you just want to run the stable horde bridge.", default=False)
|
||||
|
||||
parser.add_argument("--bridge", action='store_true', help="don't launch web server, but make this instance into a Horde bridge.", default=False)
|
||||
parser.add_argument('--horde_api_key', action="store", required=False, type=str, help="The API key corresponding to the owner of this Horde instance")
|
||||
parser.add_argument('--horde_name', action="store", required=False, type=str, help="The server name for the Horde. It will be shown to the world and there can be only one.")
|
||||
parser.add_argument('--horde_url', action="store", required=False, type=str, help="The SH Horde URL. Where the bridge will pickup prompts and send the finished generations.")
|
||||
parser.add_argument('--horde_priority_usernames',type=str, action='append', required=False, help="Usernames which get priority use in this horde instance. The owner's username is always in this list.")
|
||||
parser.add_argument('--horde_max_power',type=int, required=False, help="How much power this instance has to generate pictures. Min: 2")
|
||||
parser.add_argument('--horde_sfw', action='store_true', required=False, help="Set to true if you do not want this worker generating NSFW images.")
|
||||
parser.add_argument('--horde_blacklist', nargs='+', required=False, help="List the words that you want to blacklist.")
|
||||
parser.add_argument('--horde_censorlist', nargs='+', required=False, help="List the words that you want to censor.")
|
||||
parser.add_argument('--horde_censor_nsfw', action='store_true', required=False, help="Set to true if you want this bridge worker to censor NSFW images.")
|
||||
parser.add_argument('--horde_model', action='store', required=False, help="Which model to run on this horde.")
|
||||
parser.add_argument('-v', '--verbosity', action='count', default=0, help="The default logging level is ERROR or higher. This value increases the amount of logging seen in your screen")
|
||||
parser.add_argument('-q', '--quiet', action='count', default=0, help="The default logging level is ERROR or higher. This value decreases the amount of logging seen in your screen")
|
||||
opt = parser.parse_args()
|
||||
|
||||
with server_state_lock["bridge"]:
|
||||
server_state["bridge"] = opt.bridge
|
||||
|
||||
try:
|
||||
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
|
||||
from transformers import logging
|
||||
# this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
|
||||
from transformers import logging
|
||||
|
||||
logging.set_verbosity_error()
|
||||
logging.set_verbosity_error()
|
||||
except:
|
||||
pass
|
||||
pass
|
||||
|
||||
# remove some annoying deprecation warnings that show every now and then.
|
||||
warnings.filterwarnings("ignore", category=DeprecationWarning)
|
||||
warnings.filterwarnings("ignore", category=UserWarning)
|
||||
warnings.filterwarnings("ignore", category=UserWarning)
|
||||
|
||||
# this should force GFPGAN and RealESRGAN onto the selected gpu as well
|
||||
#os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
|
||||
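The configuration bootstrap near the top of this hunk loads the shipped `configs/webui/webui_streamlit.yaml` defaults and overlays `configs/webui/userconfig_streamlit.yaml` on top of them with `OmegaConf.merge`. A self-contained illustration of that layering (the keys below are hypothetical stand-ins, not actual settings from the config files):

```python
from omegaconf import OmegaConf

# Stand-in for the shipped defaults (webui_streamlit.yaml).
defaults = OmegaConf.create({"general": {"outdir": "outputs", "gpu": 0}})
# Stand-in for the user overrides (userconfig_streamlit.yaml).
user = OmegaConf.create({"general": {"outdir": "my_outputs"}})

merged = OmegaConf.merge(defaults, user)      # later configs win on conflicting keys
assert merged.general.outdir == "my_outputs"  # user value overrides the default
assert merged.general.gpu == 0                # untouched keys keep their defaults
```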
@ -74,104 +92,228 @@ warnings.filterwarnings("ignore", category=UserWarning)
|
||||
|
||||
|
||||
# functions to load css locally OR remotely starts here. Options exist for future flexibility. Called as st.markdown with unsafe_allow_html as css injection
|
||||
# TODO, maybe look into async loading the file especially for remote fetching
|
||||
# TODO, maybe look into async loading the file especially for remote fetching
|
||||
def local_css(file_name):
|
||||
with open(file_name) as f:
|
||||
st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)
|
||||
with open(file_name) as f:
|
||||
st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)
|
||||
|
||||
def remote_css(url):
|
||||
st.markdown(f'<link href="{url}" rel="stylesheet">', unsafe_allow_html=True)
|
||||
st.markdown(f'<link href="{url}" rel="stylesheet">', unsafe_allow_html=True)
|
||||
|
||||
def load_css(isLocal, nameOrURL):
|
||||
if(isLocal):
|
||||
local_css(nameOrURL)
|
||||
else:
|
||||
remote_css(nameOrURL)
|
||||
if(isLocal):
|
||||
local_css(nameOrURL)
|
||||
else:
|
||||
remote_css(nameOrURL)
|
||||
|
||||
@logger.catch(reraise=True)
|
||||
def layout():
|
||||
"""Layout functions to define all the streamlit layout here."""
|
||||
st.set_page_config(page_title="Stable Diffusion Playground", layout="wide")
|
||||
#app = st.HydraApp(title='Stable Diffusion WebUI', favicon="", sidebar_state="expanded",
|
||||
#hide_streamlit_markers=False, allow_url_nav=True , clear_cross_app_sessions=False)
|
||||
"""Layout functions to define all the streamlit layout here."""
|
||||
if not st.session_state["defaults"].debug.enable_hydralit:
|
||||
st.set_page_config(page_title="Stable Diffusion Playground", layout="wide", initial_sidebar_state="collapsed")
|
||||
|
||||
with st.empty():
|
||||
# load css as an external file, function has an option to local or remote url. Potential use when running from cloud infra that might not have access to local path.
|
||||
load_css(True, 'frontend/css/streamlit.main.css')
|
||||
|
||||
# check if the models exist on their respective folders
|
||||
with server_state_lock["GFPGAN_available"]:
|
||||
if os.path.exists(os.path.join(st.session_state["defaults"].general.GFPGAN_dir, f"{st.session_state['defaults'].general.GFPGAN_model}.pth")):
|
||||
server_state["GFPGAN_available"] = True
|
||||
else:
|
||||
server_state["GFPGAN_available"] = False
|
||||
#app = st.HydraApp(title='Stable Diffusion WebUI', favicon="", sidebar_state="expanded", layout="wide",
|
||||
#hide_streamlit_markers=False, allow_url_nav=True , clear_cross_app_sessions=False)
|
||||
|
||||
with st.empty():
|
||||
# load css as an external file, function has an option to local or remote url. Potential use when running from cloud infra that might not have access to local path.
|
||||
load_css(True, 'frontend/css/streamlit.main.css')
|
||||
|
||||
#
|
||||
# specify the primary menu definition
|
||||
menu_data = [
|
||||
{'id': 'Stable Diffusion', 'label': 'Stable Diffusion', 'icon': 'bi bi-grid-1x2-fill'},
|
||||
{'id': 'Textual Inversion', 'label': 'Textual Inversion', 'icon': 'bi bi-lightbulb-fill'},
|
||||
{'id': 'Model Manager', 'label': 'Model Manager', 'icon': 'bi bi-cloud-arrow-down-fill'},
|
||||
#{'id': 'Tools','label':"Tools", 'icon': "bi bi-tools", 'submenu':[
|
||||
{'id': 'API Server', 'label': 'API Server', 'icon': 'bi bi-server'},
|
||||
{'id': 'Settings', 'label': 'Settings', 'icon': 'bi bi-gear-fill'},
|
||||
#{'icon': "fa-solid fa-radar",'label':"Dropdown1", 'submenu':[
|
||||
# {'id':' subid11','icon': "fa fa-paperclip", 'label':"Sub-item 1"},{'id':'subid12','icon': "💀", 'label':"Sub-item 2"},{'id':'subid13','icon': "fa fa-database", 'label':"Sub-item 3"}]},
|
||||
#{'icon': "far fa-chart-bar", 'label':"Chart"},#no tooltip message
|
||||
#{'id':' Crazy return value 💀','icon': "💀", 'label':"Calendar"},
|
||||
#{'icon': "fas fa-tachometer-alt", 'label':"Dashboard",'ttip':"I'm the Dashboard tooltip!"}, #can add a tooltip message
|
||||
#{'icon': "far fa-copy", 'label':"Right End"},
|
||||
#{'icon': "fa-solid fa-radar",'label':"Dropdown2", 'submenu':[{'label':"Sub-item 1", 'icon': "fa fa-meh"},{'label':"Sub-item 2"},{'icon':'🙉','label':"Sub-item 3",}]},
|
||||
]
|
||||
|
||||
over_theme = {'txc_inactive': '#FFFFFF', "menu_background":'#000000'}
|
||||
|
||||
menu_id = hc.nav_bar(
|
||||
menu_definition=menu_data,
|
||||
#home_name='Home',
|
||||
#login_name='Logout',
|
||||
hide_streamlit_markers=False,
|
||||
override_theme=over_theme,
|
||||
sticky_nav=True,
|
||||
sticky_mode='pinned',
|
||||
)
|
||||
|
||||
# check if the models exist on their respective folders
|
||||
with server_state_lock["GFPGAN_available"]:
|
||||
if os.path.exists(os.path.join(st.session_state["defaults"].general.GFPGAN_dir, f"{st.session_state['defaults'].general.GFPGAN_model}.pth")):
|
||||
server_state["GFPGAN_available"] = True
|
||||
else:
|
||||
server_state["GFPGAN_available"] = False
|
||||
|
||||
with server_state_lock["RealESRGAN_available"]:
|
||||
if os.path.exists(os.path.join(st.session_state["defaults"].general.RealESRGAN_dir, f"{st.session_state['defaults'].general.RealESRGAN_model}.pth")):
|
||||
server_state["RealESRGAN_available"] = True
|
||||
else:
|
||||
server_state["RealESRGAN_available"] = False
|
||||
|
||||
#with st.sidebar:
|
||||
#page = on_hover_tabs(tabName=['Stable Diffusion', "Textual Inversion","Model Manager","Settings"],
|
||||
#iconName=['dashboard','model_training' ,'cloud_download', 'settings'], default_choice=0)
|
||||
|
||||
# need to see how to get the icons to show for the hydralit option_bar
|
||||
#page = hc.option_bar([{'icon':'grid-outline','label':'Stable Diffusion'}, {'label':"Textual Inversion"},
|
||||
#{'label':"Model Manager"},{'label':"Settings"}],
|
||||
#horizontal_orientation=False,
|
||||
#override_theme={'txc_inactive': 'white','menu_background':'#111', 'stVerticalBlock': '#111','txc_active':'yellow','option_active':'blue'})
|
||||
|
||||
if menu_id == "Stable Diffusion":
|
||||
# set the page url and title
|
||||
#st.experimental_set_query_params(page='stable-diffusion')
|
||||
try:
|
||||
set_page_title("Stable Diffusion Playground")
|
||||
except NameError:
|
||||
st.experimental_rerun()
|
||||
|
||||
txt2img_tab, img2img_tab, txt2vid_tab, img2txt_tab, concept_library_tab = st.tabs(["Text-to-Image", "Image-to-Image",
|
||||
"Text-to-Video", "Image-To-Text",
|
||||
"Concept Library"])
|
||||
#with home_tab:
|
||||
#from home import layout
|
||||
#layout()
|
||||
|
||||
with txt2img_tab:
|
||||
from txt2img import layout
|
||||
layout()
|
||||
|
||||
with img2img_tab:
|
||||
from img2img import layout
|
||||
layout()
|
||||
|
||||
#with inpainting_tab:
|
||||
#from inpainting import layout
|
||||
#layout()
|
||||
|
||||
with txt2vid_tab:
|
||||
from txt2vid import layout
|
||||
layout()
|
||||
|
||||
with img2txt_tab:
|
||||
from img2txt import layout
|
||||
layout()
|
||||
|
||||
with concept_library_tab:
|
||||
from sd_concept_library import layout
|
||||
layout()
|
||||
|
||||
#
elif menu_id == 'Model Manager':
    set_page_title("Model Manager - Stable Diffusion Playground")

    from ModelManager import layout
    layout()

elif menu_id == 'Textual Inversion':
    from textual_inversion import layout
    layout()

elif menu_id == 'API Server':
    set_page_title("API Server - Stable Diffusion Playground")
    from APIServer import layout
    layout()

elif menu_id == 'Settings':
    set_page_title("Settings - Stable Diffusion Playground")

    from Settings import layout
    layout()

# calling the draggable input component module at the end, so it works on all pages
draggable_number_input.load()

with server_state_lock["RealESRGAN_available"]:
|
||||
if os.path.exists(os.path.join(st.session_state["defaults"].general.RealESRGAN_dir, f"{st.session_state['defaults'].general.RealESRGAN_model}.pth")):
|
||||
server_state["RealESRGAN_available"] = True
|
||||
else:
|
||||
server_state["RealESRGAN_available"] = False
|
||||
|
||||
with st.sidebar:
|
||||
tabs = on_hover_tabs(tabName=['Stable Diffusion', "Textual Inversion","Model Manager","Settings"],
|
||||
iconName=['dashboard','model_training' ,'cloud_download', 'settings'], default_choice=0)
|
||||
|
||||
# need to see how to get the icons to show for the hydralit option_bar
|
||||
#tabs = hc.option_bar([{'icon':'grid-outline','label':'Stable Diffusion'}, {'label':"Textual Inversion"},
|
||||
#{'label':"Model Manager"},{'label':"Settings"}],
|
||||
#horizontal_orientation=False,
|
||||
#override_theme={'txc_inactive': 'white','menu_background':'#111', 'stVerticalBlock': '#111','txc_active':'yellow','option_active':'blue'})
|
||||
|
||||
if tabs == 'Stable Diffusion':
    # set the page url and title
    st.experimental_set_query_params(page='stable-diffusion')
    try:
        set_page_title("Stable Diffusion Playground")
    except NameError:
        st.experimental_rerun()

    txt2img_tab, img2img_tab, txt2vid_tab, img2txt_tab, concept_library_tab = st.tabs(["Text-to-Image", "Image-to-Image",
                                                                                        "Text-to-Video", "Image-To-Text",
                                                                                        "Concept Library"])
    #with home_tab:
        #from home import layout
        #layout()

    with txt2img_tab:
        from txt2img import layout
        layout()

    with img2img_tab:
        from img2img import layout
        layout()

    with txt2vid_tab:
        from txt2vid import layout
        layout()

    with img2txt_tab:
        from img2txt import layout
        layout()

    with concept_library_tab:
        from sd_concept_library import layout
        layout()

#
elif tabs == 'Model Manager':
    set_page_title("Model Manager - Stable Diffusion Playground")

    from ModelManager import layout
    layout()

elif tabs == 'Textual Inversion':
    from textual_inversion import layout
    layout()

elif tabs == 'Settings':
    set_page_title("Settings - Stable Diffusion Playground")

    from Settings import layout
    layout()

if __name__ == '__main__':
    layout()

    set_logger_verbosity(opt.verbosity)
    quiesce_logger(opt.quiet)

    if not opt.headless:
        layout()
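    # Usage sketch (an assumption about the CLI: `opt.headless` above suggests a
    # --headless flag): starting the app with that flag set would skip the layout()
    # call here and go straight to the bridge handling below.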
with server_state_lock["bridge"]:
|
||||
if server_state["bridge"]:
|
||||
try:
|
||||
import bridgeData as cd
|
||||
except ModuleNotFoundError as e:
|
||||
logger.warning("No bridgeData found. Falling back to default where no CLI args are set.")
|
||||
logger.debug(str(e))
|
||||
except SyntaxError as e:
|
||||
logger.warning("bridgeData found, but is malformed. Falling back to default where no CLI args are set.")
|
||||
logger.debug(str(e))
|
||||
except Exception as e:
|
||||
logger.warning("No bridgeData found, use default where no CLI args are set")
|
||||
logger.debug(str(e))
|
||||
finally:
|
||||
try: # check if cd exists (i.e. bridgeData loaded properly)
|
||||
cd
|
||||
except: # if not, create defaults
|
||||
                    class temp(object):
                        def __init__(self):
                            random.seed()
                            self.horde_url = "https://stablehorde.net"
                            # Give a cool name to your instance
                            self.horde_name = f"Automated Instance #{random.randint(-100000000, 100000000)}"
                            # The api_key identifies a unique user in the horde
                            self.horde_api_key = "0000000000"
                            # Put other users whose prompts you want to prioritize.
                            # The owner's username is always included, so you don't need to add it here
                            # unless you want it to have lower priority than another user.
                            self.horde_priority_usernames = []
                            self.horde_max_power = 8
                            self.nsfw = True
                            self.censor_nsfw = False
                            self.blacklist = []
                            self.censorlist = []
                            self.models_to_load = ["stable_diffusion"]
                    cd = temp()
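            # Illustrative only (not part of this script): a minimal bridgeData.py,
            # importable from the working directory, would be picked up by
            # `import bridgeData as cd` above and override these defaults. The names
            # mirror the fallback class; the values are placeholders.
            #
            #     horde_url = "https://stablehorde.net"
            #     horde_name = "My Bridge Instance"
            #     horde_api_key = "0000000000"
            #     horde_priority_usernames = []
            #     horde_max_power = 8
            #     nsfw = True
            #     censor_nsfw = False
            #     blacklist = []
            #     censorlist = []
            #     models_to_load = ["stable_diffusion"]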
            horde_api_key = opt.horde_api_key if opt.horde_api_key else cd.horde_api_key
            horde_name = opt.horde_name if opt.horde_name else cd.horde_name
            horde_url = opt.horde_url if opt.horde_url else cd.horde_url
            horde_priority_usernames = opt.horde_priority_usernames if opt.horde_priority_usernames else cd.horde_priority_usernames
            horde_max_power = opt.horde_max_power if opt.horde_max_power else cd.horde_max_power
            # Not used yet
            horde_models = [opt.horde_model] if opt.horde_model else cd.models_to_load
            try:
                horde_nsfw = not opt.horde_sfw if opt.horde_sfw else cd.horde_nsfw
            except AttributeError:
                horde_nsfw = True
            try:
                horde_censor_nsfw = opt.horde_censor_nsfw if opt.horde_censor_nsfw else cd.horde_censor_nsfw
            except AttributeError:
                horde_censor_nsfw = False
            try:
                horde_blacklist = opt.horde_blacklist if opt.horde_blacklist else cd.horde_blacklist
            except AttributeError:
                horde_blacklist = []
            try:
                horde_censorlist = opt.horde_censorlist if opt.horde_censorlist else cd.horde_censorlist
            except AttributeError:
                horde_censorlist = []
            if horde_max_power < 2:
                horde_max_power = 2
            horde_max_pixels = 64*64*8*horde_max_power
            logger.info(f"Joining Horde with parameters: Server Name '{horde_name}'. Horde URL '{horde_url}'. Max Pixels {horde_max_pixels}")
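            # For reference: with the default horde_max_power of 8 this works out to
            # 64*64*8*8 = 262,144 pixels, i.e. roughly one 512x512 image per request.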
            try:
                # Run the bridge in a background daemon thread; pass the target and its
                # arguments separately so the thread (not this call) executes run_bridge.
                thread = threading.Thread(target=run_bridge,
                                          args=(1, horde_api_key, horde_name, horde_url,
                                                horde_priority_usernames, horde_max_pixels,
                                                horde_nsfw, horde_censor_nsfw, horde_blacklist,
                                                horde_censorlist))
                thread.daemon = True
                thread.start()
                #run_bridge(1, horde_api_key, horde_name, horde_url, horde_priority_usernames, horde_max_pixels, horde_nsfw, horde_censor_nsfw, horde_blacklist, horde_censorlist)
            except KeyboardInterrupt:
                print("Keyboard Interrupt Received. Ending Bridge")
@ -58,20 +58,23 @@ IF "%v_conda_path%"=="" (

:CONDA_FOUND
echo Stashing local changes and pulling latest update...
git status --porcelain=1 -uno | findstr . && set "HasChanges=1" || set "HasChanges=0"
call git stash
call git pull
IF "%HasChanges%" == "0" GOTO SKIP_RESTORE

set /P restore="Do you want to restore changes you made before updating? (Y/N): "
IF /I "%restore%" == "N" (
    echo Removing changes please wait...
    echo Removing changes...
    call git stash drop
    echo Changes removed, press any key to continue...
    pause >nul
    echo Changes removed
) ELSE IF /I "%restore%" == "Y" (
    echo Restoring changes, please wait...
    echo Restoring changes...
    call git stash pop --quiet
    echo Changes restored, press any key to continue...
    pause >nul
    echo Changes restored
)

:SKIP_RESTORE
call "%v_conda_path%\Scripts\activate.bat"

for /f "delims=" %%a in ('git log -1 --format^="%%H" -- environment.yaml') DO set v_cur_hash=%%a

15
webui.cmd
15
webui.cmd
@ -62,20 +62,23 @@ IF "%v_conda_path%"=="" (

:CONDA_FOUND
echo Stashing local changes and pulling latest update...
git status --porcelain=1 -uno | findstr . && set "HasChanges=1" || set "HasChanges=0"
call git stash
call git pull
IF "%HasChanges%" == "0" GOTO SKIP_RESTORE

set /P restore="Do you want to restore changes you made before updating? (Y/N): "
IF /I "%restore%" == "N" (
    echo Removing changes please wait...
    echo Removing changes...
    call git stash drop
    echo Changes removed, press any key to continue...
    pause >nul
    echo Changes removed
) ELSE IF /I "%restore%" == "Y" (
    echo Restoring changes, please wait...
    echo Restoring changes...
    call git stash pop --quiet
    echo Changes restored, press any key to continue...
    pause >nul
    echo Changes restored
)

:SKIP_RESTORE
call "%v_conda_path%\Scripts\activate.bat"

for /f "delims=" %%a in ('git log -1 --format^="%%H" -- environment.yaml') DO set v_cur_hash=%%a

30
webui.sh
30
webui.sh
@ -1,4 +1,5 @@

#!/bin/bash -i

# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).

# Copyright 2022 sd-webui team.

@ -93,22 +94,6 @@ conda_env_activation () {
    conda info | grep active
}

# Check to see if the SD model already exists, if not then it creates it and prompts the user to add the SD AI models to the repo directory
sd_model_loading () {
    if [ -f "$DIRECTORY/models/ldm/stable-diffusion-v1/model.ckpt" ]; then
        printf "AI Model already in place. Continuing...\n\n"
    else
        printf "\n\n########## MOVE MODEL FILE ##########\n\n"
        printf "Please download the 1.4 AI Model from Huggingface (or another source) and place it inside of the stable-diffusion-webui folder\n\n"
        read -p "Once you have sd-v1-4.ckpt in the project root, Press Enter...\n\n"

        # Check to make sure checksum of models is the original one from HuggingFace and not a fake model set
        printf "fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556 sd-v1-4.ckpt" | sha256sum --check || exit 1
        mv sd-v1-4.ckpt $DIRECTORY/models/ldm/stable-diffusion-v1/model.ckpt
        rm -r ./Models
    fi
}
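# Manual verification sketch (assumes sd-v1-4.ckpt is sitting in the current directory):
# the same checksum the function above checks can be verified by hand before launching with
#   echo "fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556  sd-v1-4.ckpt" | sha256sum --check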

# Checks to see if the upscaling models exist in their correct locations. If they do not they will be downloaded as required
post_processor_model_loading () {
    # Check to see if GFPGAN has been added yet, if not it will download it and place it in the proper directory

@ -162,6 +147,13 @@ post_processor_model_loading () {

# Show the user a prompt asking them which version of the WebUI they wish to use, Streamlit or Gradio
launch_webui () {
    # skip the prompt if --bridge command-line argument is detected
    for arg in "$@"; do
        if [ "$arg" == "--bridge" ]; then
            python -u scripts/relauncher.py "$@"
            return
        fi
    done
    printf "\n\n########## LAUNCH USING STREAMLIT OR GRADIO? ##########\n\n"
    printf "Do you wish to run the WebUI using the Gradio or StreamLit Interface?\n\n"
    printf "Streamlit: \nHas A More Modern UI \nMore Features Planned \nWill Be The Main UI Going Forward \nCurrently In Active Development \nMissing Some Gradio Features\n\n"
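# Usage sketch: passing --bridge on the command line (e.g. `./webui.sh --bridge`)
# skips the Streamlit/Gradio prompt above and hands all arguments straight to
# scripts/relauncher.py to start the horde bridge instead.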
@ -181,9 +173,9 @@ start_initialization () {
    sd_model_loading
    post_processor_model_loading
    conda_env_activation
    if [ ! -e "models/ldm/stable-diffusion-v1/model.ckpt" ]; then
        echo "Your model file does not exist! Place it in 'models/ldm/stable-diffusion-v1' with the name 'model.ckpt'."
        exit 1
    if ! compgen -G "models/ldm/stable-diffusion-v1/*.ckpt" > /dev/null; then
        echo "Your model file does not exist! Streamlit will handle this automatically, however Gradio still requires this file to be placed manually. If you intend to use the Gradio interface, place it in 'models/ldm/stable-diffusion-v1' with the name 'model.ckpt'."
        read -p "Once you have sd-v1-4.ckpt in the project root, if you are going to use Gradio, Press Enter...\n\n"
    fi
    launch_webui "$@"
