name: ldm
# This file is part of stable-diffusion-webui (https://github.com/sd-webui/stable-diffusion-webui/).

# Copyright 2022 sd-webui team.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
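# Usage (a typical invocation; assumes this file is saved as environment.yaml
# at the repository root, so adjust the path to match your checkout):
#   conda env create -f environment.yaml
#   conda activate ldm
# To pick up later dependency changes:
#   conda env update -f environment.yaml --prune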
channels:
    - pytorch
    - defaults
# Psst. If you change a dependency, make sure it's mirrored in the docker requirement
# files as well.
dependencies:
    - cudatoolkit=11.3
    - git
    - numpy=1.22.3
    - pip=20.3
    - python=3.8.5
    - pytorch=1.11.0
    - scikit-image=0.19.2
    - torchvision=0.12.0
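# Entries under pip: below are installed with pip inside the conda environment.
# The -e lines are editable VCS installs; pip usually clones their sources into
# a local src/ directory (the exact location can vary with pip version).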
    - pip:
        - -e .
        - -e git+https://github.com/CompVis/taming-transformers#egg=taming-transformers
        - -e git+https://github.com/openai/CLIP#egg=clip
        - -e git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion
        - -e git+https://github.com/devilismyfriend/latent-diffusion#egg=latent-diffusion
        - accelerate==0.12.0
        - albumentations==0.4.3
        - basicsr>=1.3.4.0
        - diffusers==0.3.0
        - einops==0.3.1
        - facexlib>=0.2.3
        - ftfy==6.1.1
        - fairscale==0.4.4
        - gradio==3.1.6
        - gfpgan==1.3.8
        - hydralit_components==1.0.10
        - hydralit==1.0.14
        - imageio-ffmpeg==0.4.2
        - imageio==2.9.0
        - kornia==0.6
        - loguru
        - omegaconf==2.1.1
        - opencv-python-headless==4.6.0.66
        - open-clip-torch==2.0.2
        - pandas==1.4.3
        - piexif==1.1.3
        - pycocotools==2.0.5
        - pycocoevalcap==1.2
        - pudb==2019.2
        - pynvml==11.4.1
        - python-slugify>=6.1.2
        - pytorch-lightning==1.4.2
        - retry>=0.9.2
        - regex
        - realesrgan==0.3.0
        - streamlit==1.13.0
        - streamlit-on-Hover-tabs==1.0.1
        - streamlit-option-menu==0.3.2
        - streamlit_nested_layout
        - streamlit-server-state==0.14.2
        - streamlit-tensorboard==0.0.2
        - test-tube>=0.7.5
        - tensorboard==2.10.1
        - timm==0.6.7
        - torch-fidelity==0.3.0
        - torchmetrics==0.6.0
        - transformers==4.19.2
        - tensorflow==2.10.0
        - tqdm==4.64.0
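# Optional sanity check once the environment is active, to confirm the pinned
# torch build and CUDA visibility:
#   python -c "import torch; print(torch.__version__, torch.cuda.is_available())"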