sygil-webui/environment.yaml


name: ldm
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).
# Copyright 2022 sd-webui team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
channels:
  - pytorch
  - defaults
# Psst. If you change a dependency, make sure it's mirrored in the docker requirement
# files as well.
dependencies:
  - cudatoolkit=11.3
  - git
  - numpy=1.22.3
  - pip=20.3
  - python=3.8.5
  - pytorch=1.11.0
  - scikit-image=0.19.2
  - torchvision=0.12.0
  - pip:
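    # The editable (-e) entries below are cloned from git at install time;
    # with pip's default --src location they typically end up under ./src/
    # (an assumption about pip's defaults, not pinned in this file).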
    - -e .
    - -e git+https://github.com/CompVis/taming-transformers#egg=taming-transformers
    - -e git+https://github.com/openai/CLIP#egg=clip
    - -e git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion
    - -e git+https://github.com/devilismyfriend/latent-diffusion#egg=latent-diffusion
    - accelerate==0.12.0
    - albumentations==0.4.3
    - basicsr>=1.3.4.0
    - diffusers==0.3.0
    - einops==0.3.1
    - facexlib>=0.2.3
    - ftfy==6.1.1
    - fairscale==0.4.4
    - gradio==3.1.6
    - gfpgan==1.3.8
    - hydralit_components==1.0.10
    - hydralit==1.0.14
    - imageio-ffmpeg==0.4.2
    - imageio==2.9.0
    - kornia==0.6
    - loguru
    - omegaconf==2.1.1
    - opencv-python-headless==4.6.0.66
    - open-clip-torch==2.0.2
    - pandas==1.4.3
    - piexif==1.1.3
    - pudb==2019.2
    - pynvml==11.4.1
    - python-slugify>=6.1.2
    - pytorch-lightning==1.4.2
    - retry>=0.9.2
    - regex
    - realesrgan==0.3.0
    - streamlit==1.13.0
    - streamlit-on-Hover-tabs==1.0.1
    - streamlit-option-menu==0.3.2
    - streamlit_nested_layout
    - streamlit-server-state==0.14.2
    - streamlit-tensorboard==0.0.2
    - test-tube>=0.7.5
    - tensorboard==2.10.1
    - timm==0.6.7
    - torch-fidelity==0.3.0
    - torchmetrics==0.6.0
    - transformers==4.19.2
    - tensorflow==2.10.0
    - tqdm==4.64.0
    - stqdm==0.0.4
    - wget
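
# A minimal usage sketch (assumes conda or mamba is on PATH and this file
# sits at the repository root):
#   conda env create -f environment.yaml
#   conda activate ldm
# After editing dependencies, `conda env update -f environment.yaml --prune`
# brings an existing environment back in sync with this file.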