Mirror of https://github.com/Sygil-Dev/sygil-webui.git
Merge pull request #1238 from moandcompany/dependency_audit_docker
Audit dependencies and revise Docker environment specification
Commit: 6009b03752

@@ -1,3 +1,4 @@
models/
outputs/
src/
gfpgan/

Dockerfile (34 lines changed)
@@ -1,29 +1,39 @@
FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
# Assumes host environment is AMD64 architecture

ENV DEBIAN_FRONTEND=noninteractive \
    PYTHONUNBUFFERED=1 \
    PYTHONIOENCODING=UTF-8 \
    CONDA_DIR=/opt/conda
# We should use the Pytorch CUDA/GPU-enabled base image. See: https://hub.docker.com/r/pytorch/pytorch/tags
# FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04

WORKDIR /sd
# Assumes AMD64 host architecture
FROM pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime

WORKDIR /install

SHELL ["/bin/bash", "-c"]

RUN apt-get update && \
    apt-get install -y libglib2.0-0 wget && \
    apt-get install -y wget git && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install miniconda
RUN wget -O ~/miniconda.sh -q --show-progress --progress=bar:force https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
    /bin/bash ~/miniconda.sh -b -p $CONDA_DIR && \
    rm ~/miniconda.sh
ENV PATH=$CONDA_DIR/bin:$PATH
COPY ./sd_requirements.txt /install/
RUN pip install -r /install/sd_requirements.txt

COPY ./requirements.txt /install/
RUN pip install -r /install/requirements.txt

COPY ./ext_requirements.txt /install
RUN pip install -r /install/ext_requirements.txt

COPY ./ui_requirements.txt /install/
RUN pip install -r /install/ui_requirements.txt

# Install font for prompt matrix
COPY /data/DejaVuSans.ttf /usr/share/fonts/truetype/

ENV PYTHONPATH=/sd

EXPOSE 7860 8501

COPY ./entrypoint.sh /sd/
ENTRYPOINT /sd/entrypoint.sh


README_Docker.md (new file, 132 lines)
@@ -0,0 +1,132 @@
# Running Stable Diffusion WebUI Using Docker

This Docker environment is intended to speed up development and testing of Stable Diffusion WebUI features. Using a container image allows Stable Diffusion / WebUI dependencies to be packaged and isolated from the host environment.

You can use this Dockerfile to build a Docker image and run Stable Diffusion WebUI locally.

Requirements:
* Host computer with an AMD64 architecture (e.g. Intel/AMD x86 64-bit CPUs)
* Host operating system of Linux, or Windows with WSL2 enabled
  * See the [Microsoft WSL2 Installation Guide for Windows 10](https://learn.microsoft.com/en-us/windows/wsl/) for more information on installing.
  * Ubuntu (the default distribution) is recommended for WSL2 on Windows
* Docker, or a compatible container runtime, installed on the host
* Docker Compose v1.29 or later
  * See [Install Docker Engine](https://docs.docker.com/engine/install/#supported-platforms) to learn more about installing Docker on your Linux operating system
* 10+ GB of free disk space (used by the Docker base image, the Stable Diffusion WebUI Docker image and its dependencies, and model files/weights)

Additional Requirements:
* Host computer equipped with a CUDA-compatible GPU (e.g. NVIDIA RTX 2xxx/3xxx series)
* NVIDIA Container Toolkit installed
  * See the [NVIDIA Container Toolkit Installation Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#supported-platforms "Official NVIDIA Installation Guide") for more information on installing.

Other Notes:
* "Optional" packages commonly used with Stable Diffusion WebUI workflows, such as RealESRGAN and GFPGAN, will be installed by default.
* An older approach to running Stable Diffusion WebUI using Docker is described here: https://github.com/sd-webui/stable-diffusion-webui/discussions/922

---

## First-Time Startup Instructions

### Clone Repository and Build the Container Image
* Clone this repository to your host machine: `git clone https://github.com/sd-webui/stable-diffusion-webui.git`
* Change directories to your copy of the repository and run the Docker image build script:
  * `cd stable-diffusion-webui`
  * `./build_docker.sh`
* The build process will take several minutes to complete
* After the image build has completed, you will have a Docker image for running the Stable Diffusion WebUI named `stable-diffusion-webui:dev`
* **The `stable-diffusion-webui:dev` Docker image will contain all software dependencies required to run Stable Diffusion and the Stable Diffusion WebUI; model files (i.e. weights/checkpoints) are stored separately from the Docker image. (Note: with this implementation, the Stable Diffusion WebUI code is also stored separately from the Docker image.)**

* If you plan to use Docker Compose to run the image in a container (most users), create an `.env_docker` file using the example file:
  * `cp .env_docker.example .env_docker`
  * Edit `.env_docker` using the text editor of your choice.
  * Options available in `.env_docker` let you control automatic model file checking/download during startup and select the Stable Diffusion WebUI implementation to run (Gradio vs. Streamlit). **You must set the `VALIDATE_MODELS` option to `true` during the first run so that the necessary model weights/checkpoints are downloaded.** You may then set `VALIDATE_MODELS` to `false` on future runs to speed up startup time (an illustrative sketch of this file appears below).
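
For reference, a minimal `.env_docker` might look like the sketch below. The variable names shown are the ones read by `entrypoint.sh` in this change; treat `.env_docker.example` as the authoritative template and adjust the values to your setup.

```bash
# Hypothetical .env_docker contents (illustrative only; see .env_docker.example)
VALIDATE_MODELS=true              # must be true on the first run so model weights are downloaded
WEBUI_SCRIPT=webui_streamlit.py   # Streamlit interface; use webui.py for the Gradio interface
WEBUI_RELAUNCH=true               # relaunch the web server automatically if it exits
```
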

### Create a Container Instance Using Docker Compose
During the first run of the image, several files, including model weights/checkpoints, will be downloaded automatically. After downloading, these files will be cached locally to speed up future runs.

The default `docker-compose.yml` file will create a Docker container instance named `sd-webui`.

* Create an instance of the Stable Diffusion WebUI image as a Docker container:
  * `docker compose up`

(Optional) Daemon mode:
* Note that you can start the container in "daemon" mode by applying the `-d` option: `docker compose up -d`
* When running in daemon mode, you can view logging output from your container by running `docker logs sd-webui`

(Note: Depending on the version of Docker/Docker Compose installed, the command may be `docker-compose` (older versions) or `docker compose` (newer versions).)

The container may take several minutes to start up if model weights/checkpoints need to be downloaded.

### Accessing your Stable Diffusion WebUI Instance
Depending on the WebUI implementation you have selected, you can access the WebUI at the following URLs:

* Gradio: http://localhost:7860
* Streamlit: http://localhost:8501

You can also access the WebUI from remote hosts using the host machine's IP address (note: this generally does not apply to Windows/WSL2 users due to WSL's networking implementation):

* Gradio: http://<host-ip-address>:7860
* Streamlit: http://<host-ip-address>:8501
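
If the pages do not load, a quick check from the host can confirm that the published ports are responding (a minimal sketch; assumes `curl` is available on the host):

```bash
# Confirm the WebUI ports published by the container are reachable from the host
curl -I http://localhost:7860   # Gradio
curl -I http://localhost:8501   # Streamlit
```
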

### Where is ___ stored?

By default, model weights/checkpoint files will be stored at the following path:
* `./model_cache/`

Output files generated by Stable Diffusion will be stored at the following path:
* `./outputs/`

The above paths are accessible directly from the Docker container's host.
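
For example, from the directory where you ran `docker compose up` (a sketch; the paths follow the volume mounts in `docker-compose.yml`):

```bash
# Inspect cached model weights and generated images from the host
ls -lh ./model_cache/
ls -lh ./outputs/
```
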

### Shutting down your Docker container
You can stop your Docker container by pressing the `CTRL+C` key combination in the terminal where the container was started.

If you started the container using `docker compose`, you can stop the container with the command:
* `docker compose down`

Using the default configuration, your Stable Diffusion output, cached model weights/files, etc. will persist between Docker container starts.

---

## Resetting your Docker environment
Should you need to do so, the included `docker-reset.sh` script will remove all Docker images, stopped containers, and cached model weights/checkpoints.

You will need to re-download all model files/weights used by Stable Diffusion WebUI, which total several gigabytes of data. This will occur automatically upon the next startup.

## Misc Related How-to
* You can obtain shell access to a running Stable Diffusion WebUI container started with Docker Compose with the following command:
  * `docker exec -it sd-webui /bin/bash`
* To start a container using the Stable Diffusion WebUI Docker image without Docker Compose, you can do so with the following command:
  * `docker run --rm -it --entrypoint /bin/bash stable-diffusion-webui:dev`
* To start a container with mapped ports, GPU resource access, and a local directory bound as a container volume, you can do so with the following command:
  * `docker run --rm -it -p 8501:8501 -p 7860:7860 --gpus all -v $(pwd):/sd --entrypoint /bin/bash stable-diffusion-webui:dev`

---

## Dockerfile Implementation Notes
Compared to the base Stable Diffusion distribution, Conda-based package management has been removed.

The PyTorch base image with NVIDIA CUDA support is used as the base Docker image to simplify dependencies.

Python dependency requirements for the various packages used by Stable Diffusion WebUI have been separated into different groups. During the container image build process, requirements are installed in the following order (see the sketch after this list):

1. Stable Diffusion (core) requirements (`sd_requirements.txt`)
2. General requirements (`requirements.txt`)
3. External optional package requirements (`ext_requirements.txt`)
4. WebUI requirements (`ui_requirements.txt`)
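
If you are reproducing the environment outside Docker, the equivalent install order is roughly the sketch below (illustrative only; the `Dockerfile` layers above are authoritative):

```bash
# Same order as the Dockerfile's COPY/RUN pip install layers
pip install -r sd_requirements.txt    # Stable Diffusion core dependencies
pip install -r requirements.txt       # general requirements
pip install -r ext_requirements.txt   # optional external packages (upscalers, LDSR, etc.)
pip install -r ui_requirements.txt    # Gradio/Streamlit UI dependencies
```
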

Python package dependencies have been version-pinned where possible.

**Developers: When developing new features or making changes to the environment that require dependency changes, please update and make notes in the appropriate file to help us better track and manage dependencies.**

### Other Notes

* The `root_profile` Docker volume
  * The `huggingface/transformers` package downloads files to a cache located at `/root/.cache/huggingface/transformers`, totaling roughly 1.6 GB

build_docker.sh (new executable file, 2 lines)
@@ -0,0 +1,2 @@
#!/bin/sh
docker build . -t stable-diffusion-webui:dev

docker-compose.yml
@@ -3,17 +3,14 @@ version: '3.3'
services:
  stable-diffusion:
    container_name: sd-webui
    build:
      context: .
      dockerfile: Dockerfile
    image: stable-diffusion-webui:dev

    env_file: .env_docker
    environment:
      PIP_EXISTS_ACTION: w

    volumes:
      - .:/sd
      - ./outputs:/sd/outputs
      - ./model_cache:/sd/model_cache
      - conda_env:/opt/conda
      - root_profile:/root
    ports:
      - '7860:7860'

@@ -25,5 +22,4 @@ services:
            - capabilities: [ gpu ]

volumes:
  conda_env:
  root_profile:


docker-reset.sh
@@ -1,23 +1,32 @@
#!/bin/bash
# It basically resets you to the beginning except for your output directory.
# How to:
# cd stable-diffusion
# ./docker-reset.sh
# Then:
# docker-compose up
# Use this script to reset your Docker-based Stable Diffusion environment
# This script will remove all cached files/models that are downloaded during your first startup


declare -a deletion_paths=("src"
                           "gfpgan"
                           "sd_webui.egg-info"
                           ".env_updated" # Check if still needed
                           )


# TODO This should be improved to be safer
install_dir=$(pwd)

echo $install_dir
read -p "Do you want to reset the above directory? (y/n) " -n 1 DIRCONFIRM
echo ""

echo $(pwd)
read -p "Is the directory above correct to run reset on? (y/n) " -n 1 DIRCONFIRM
if [[ $DIRCONFIRM =~ ^[Yy]$ ]]; then
    docker compose down
    docker image rm stable-diffusion-webui_stable-diffusion:latest
    docker volume rm stable-diffusion-webui_conda_env
    docker image rm stable-diffusion-webui:dev
    docker volume rm stable-diffusion-webui_root_profile
    echo "Remove ./src"
    sudo rm -rf src
    sudo rm -rf gfpgan
    sudo rm -rf sd_webui.egg-info
    sudo rm .env_updated

    for path in "${deletion_paths[@]}"
    do
        echo "Removing files located at path: $install_dir/$path"
        rm -rf $path
    done
else
    echo "Exited without resetting"
    echo "Exited without reset"
fi


entrypoint.sh
@@ -1,6 +1,6 @@
#!/bin/bash
#
# Starts the gui inside the docker container using the conda env
# Starts the webserver inside the docker container
#

# set -x

@@ -24,39 +24,6 @@ MODEL_FILES=(
    'model.ckpt src/latent-diffusion/experiments/pretrained_models https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1 c209caecac2f97b4bb8f4d726b70ac2ac9b35904b7fc99801e1f5e61f9210c13'
)

# Conda environment installs/updates
# @see https://github.com/ContinuumIO/docker-images/issues/89#issuecomment-467287039
ENV_NAME="ldm"
ENV_FILE="${SCRIPT_DIR}/environment.yaml"
ENV_UPDATED=0
ENV_MODIFIED=$(date -r $ENV_FILE "+%s")
ENV_MODIFED_FILE="${SCRIPT_DIR}/.env_updated"
if [[ -f $ENV_MODIFED_FILE ]]; then ENV_MODIFIED_CACHED=$(<${ENV_MODIFED_FILE}); else ENV_MODIFIED_CACHED=0; fi
export PIP_EXISTS_ACTION=w

# Create/update conda env if needed
if ! conda env list | grep ".*${ENV_NAME}.*" >/dev/null 2>&1; then
    echo "Could not find conda env: ${ENV_NAME} ... creating ..."
    conda env create -f $ENV_FILE
    echo "source activate ${ENV_NAME}" > /root/.bashrc
    ENV_UPDATED=1
elif [[ ! -z $CONDA_FORCE_UPDATE && $CONDA_FORCE_UPDATE == "true" ]] || (( $ENV_MODIFIED > $ENV_MODIFIED_CACHED )); then
    echo "Updating conda env: ${ENV_NAME} ..."
    conda env update --file $ENV_FILE --prune
    ENV_UPDATED=1
fi

# Clear artifacts from conda after create/update
# @see https://docs.conda.io/projects/conda/en/latest/commands/clean.html
if (( $ENV_UPDATED > 0 )); then
    conda clean --all
    echo -n $ENV_MODIFIED > $ENV_MODIFED_FILE
fi

# activate conda env
. /opt/conda/etc/profile.d/conda.sh
conda activate $ENV_NAME
conda info | grep active

# Function to checks for valid hash for model files and download/replaces if invalid or does not exist
validateDownloadModel() {

@@ -89,23 +56,29 @@ validateDownloadModel() {
    fi
}

# Validate model files
echo "Validating model files..."
for models in "${MODEL_FILES[@]}"; do
    model=($models)
    if [[ ! -e ${model[1]}/${model[0]} || ! -L ${model[1]}/${model[0]} || -z $VALIDATE_MODELS || $VALIDATE_MODELS == "true" ]]; then
        validateDownloadModel ${model[0]} ${model[1]} ${model[2]} ${model[3]}
    fi
done

# Launch web gui
# Validate model files
if [ $VALIDATE_MODELS == "false" ]; then
    echo "Skipping model file validation..."
else
    echo "Validating model files..."
    for models in "${MODEL_FILES[@]}"; do
        model=($models)
        if [[ ! -e ${model[1]}/${model[0]} || ! -L ${model[1]}/${model[0]} || -z $VALIDATE_MODELS || $VALIDATE_MODELS == "true" ]]; then
            validateDownloadModel ${model[0]} ${model[1]} ${model[2]} ${model[3]}
        fi
    done
fi

# Determine which webserver interface to launch (Streamlit vs Default: Gradio)
if [[ ! -z $WEBUI_SCRIPT && $WEBUI_SCRIPT == "webui_streamlit.py" ]]; then
    launch_command="streamlit run scripts/${WEBUI_SCRIPT:-webui.py} $WEBUI_ARGS"
else
    launch_command="python scripts/${WEBUI_SCRIPT:-webui.py} $WEBUI_ARGS"
fi

launch_message="entrypoint.sh: Run ${launch_command}..."
# Start webserver interface
launch_message="Starting Stable Diffusion WebUI... ${launch_command}..."
if [[ -z $WEBUI_RELAUNCH || $WEBUI_RELAUNCH == "true" ]]; then
    n=0
    while true; do

ext_requirements.txt (new file, 15 lines)
@@ -0,0 +1,15 @@
# Optional packages commonly used with Stable Diffusion workflow

# Upscalers
basicsr==1.4.2 # required by RealESRGAN
gfpgan==1.3.8 # GFPGAN
realesrgan==0.2.8 # RealESRGAN brings in GFPGAN as a requirement
-e git+https://github.com/devilismyfriend/latent-diffusion#egg=latent-diffusion #ldsr


# Orphaned Packages: No usage found
#albumentations
#imageio-ffmpeg
#pudb
#test-tube
#torch-fidelity

requirements.txt (new file, 16 lines)
@@ -0,0 +1,16 @@
# Additional Stable Diffusion Requirements
# TODO: Pin external dependency versions

#opencv-python==4.6.0.66 # Opencv python already satisfied upstream
opencv-python-headless==4.6.0.66 # Needed to operate opencv in headless/server mode


taming-transformers-rom1504==0.0.6 # required by ldm
# See: https://github.com/CompVis/taming-transformers/issues/176
# -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers # required by ldm
# Note: taming package needs to be installed with -e option


git+https://github.com/crowsonkb/k-diffusion.git
# Note: K-diffusion brings in CLIP 1.0 as a dependency automatically; will create a dependency resolution conflict when explicitly specified together
# git+https://github.com/openai/CLIP.git@main#egg=clip

sd_requirements.txt (new file, 13 lines)
@@ -0,0 +1,13 @@
# Core Stable Diffusion Dependencies

# Minimum Environment Dependencies for Stable Diffusion
#torch # already satisfied as 1.12.1 from base image
#torchvision # already satisfied as 0.13.1 from base image
#numpy==1.19.2 # already satisfied as 1.21.5 from base image


# Stable Diffusion (see: https://github.com/CompVis/stable-diffusion)
transformers==4.21.1
diffusers==0.3.0
invisible-watermark==0.1.5
pytorch_lightning==1.7.6

ui_requirements.txt (new file, 24 lines)
@@ -0,0 +1,24 @@
# Dependencies required for Stable Diffusion UI
pynvml==11.4.1
omegaconf==2.2.3

Jinja2==3.1.2 # Jinja2 is required by Gradio
# Note: Jinja2 3.x major version required due to breaking changes found in markupsafe==2.1.1; 2.0.1 is incompatible with other upstream dependencies
# see https://github.com/pallets/markupsafe/issues/304


# Environment Dependencies for WebUI (gradio)
gradio==3.3.1


# Environment Dependencies for WebUI (streamlit)
streamlit==1.12.2
streamlit-on-Hover-tabs==1.0.1
streamlit-option-menu==0.3.2
streamlit_nested_layout==0.1.1


# Other
retry==0.9.2 # used by sdutils
python-slugify==6.1.2 # used by sdutils
piexif==1.1.3 # used by sdutils