AMD GPU Docker (#1366)

# Description
AMD support can be enabled by prefixing your compose commands with
`docker compose -f docker-compose.yml -f docker-compose.amd.yml ...`

or by setting
```
export COMPOSE_FILE=docker-compose.yml:docker-compose.amd.yml
```
in your `.profile` or through a tool like `direnv`.
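
For example, a typical build-and-run invocation might look like the following (a sketch; `up --build` is assumed here, not prescribed by this PR):
```
# Build against the ROCm base image and start the stack
docker compose -f docker-compose.yml -f docker-compose.amd.yml up --build
```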


Closes: Discord #installation-packing:AMD, at least for Linux hosts

# Checklist:

- [x] I have changed the base branch to `dev`
- [x] I have performed a self-review of my own code
- [x] I have commented my code in hard-to-understand areas
- [x] I have made corresponding changes to the documentation

Dockerfile

@@ -14,12 +14,17 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# Assumes host environment is AMD64 architecture
# ARG TARGETPLATFORM
# We should use the Pytorch CUDA/GPU-enabled base image. See: https://hub.docker.com/r/pytorch/pytorch/tags
# FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
# This is used to allow building against AMD GPUs
# Annoyingly, you can't IF branch off of, say, TARGETGPU and set
# the Dockerfile's FROM based on that, so we have to have the user
# pass in the entire image path for us.
ARG PYTORCH_IMAGE=pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime
# To build against AMD, use
# --build-arg PYTORCH_IMAGE=rocm/pytorch:rocm5.2.3_ubuntu20.04_py3.7_pytorch_1.12.1
# Assumes AMD64 host architecture
FROM pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime
FROM ${PYTORCH_IMAGE}
WORKDIR /install
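
Outside of compose, the same switch can be exercised with a plain `docker build`; a sketch, assuming the Dockerfile sits at the repo root (the `-t` tag name is illustrative):
```
docker build \
  --build-arg PYTORCH_IMAGE=rocm/pytorch:rocm5.2.3_ubuntu20.04_py3.7_pytorch_1.12.1 \
  -t sd-webui:rocm .
```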

docker-compose.amd.yml (new file)

@@ -0,0 +1,8 @@
services:
stable-diffusion:
build:
args:
PYTORCH_IMAGE: rocm/pytorch:rocm5.2.3_ubuntu20.04_py3.7_pytorch_1.12.1
devices:
- /dev/dri
- /dev/kfd
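
Once the stack builds, one way to sanity-check the GPU passthrough (a sketch; ROCm builds of PyTorch report through the `torch.cuda` API, so `True` here means the AMD GPU was found):
```
docker compose -f docker-compose.yml -f docker-compose.amd.yml run --rm stable-diffusion \
  python -c "import torch; print(torch.cuda.is_available())"
```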

Nvidia-specific compose override (new file)

@@ -0,0 +1,10 @@
# Nvidia specific config
version: '3.3'
services:
stable-diffusion:
deploy:
resources:
reservations:
devices:
- capabilities: [ gpu ]
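
The `deploy.resources.reservations.devices` stanza is compose's declarative counterpart to `docker run --gpus all`, and it still requires the NVIDIA Container Toolkit on the host. A quick host-side smoke test (a sketch, reusing the runtime image mentioned in the Dockerfile comments):
```
docker run --rm --gpus all nvidia/cuda:11.3.1-runtime-ubuntu20.04 nvidia-smi
```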

docker-compose.yml

@@ -28,6 +28,7 @@ services:
- .:/sd
- ./outputs:/sd/outputs
- ./model_cache:/sd/model_cache
- ~/.huggingface/token:/root/.huggingface/token
- root_profile:/root
ports:
- '7860:7860'
@@ -37,11 +38,6 @@ services:
interval: 30s
timeout: 1s
retries: 10
deploy:
resources:
reservations:
devices:
- capabilities: [ gpu ]
volumes:
root_profile:
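
The new `~/.huggingface/token` mount means a host-side Hugging Face login carries into the container; a sketch, assuming a `huggingface-cli` version (from the `huggingface_hub` package) that writes its token to `~/.huggingface/token`:
```
# Writes ~/.huggingface/token on the host, which the volume above
# maps to /root/.huggingface/token inside the container
huggingface-cli login
```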

Installation documentation

@@ -43,6 +43,19 @@ Other Notes:
* "Optional" packages commonly used with Stable Diffusion WebUI workflows such as, RealESRGAN, GFPGAN, will be installed by default.
* An older version of running Stable Diffusion WebUI using Docker exists here: https://github.com/sd-webui/stable-diffusion-webui/discussions/922
### But what about AMD?
There is tentative support for AMD GPUs through docker which can be enabled via `docker-compose.amd.yml`,
although this is still in the early stages. Right now, this _only_ works on native linux (not WSL2) due
to issues with AMDs support of GPU passthrough. You also _must_ have ROCm drivers installed on the host.
```
docker compose -f docker-compose.yml -f docker-compose.amd.yml ...
```
or by setting
```
export COMPOSE_FILE=docker-compose.yml:docker-compose.amd.yml
```
in your `.profile` or through a tool like `direnv`.
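
For the `direnv` route, a sketch (assuming direnv is already hooked into your shell):
```
# From the repo root: persist the compose-file selection per directory
echo 'export COMPOSE_FILE=docker-compose.yml:docker-compose.amd.yml' >> .envrc
direnv allow
docker compose up   # both compose files are now picked up automatically
```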
---

Requirements file

@@ -3,7 +3,7 @@
# Minimum Environment Dependencies for Stable Diffusion
#torch # already satisfied as 1.12.1 from base image
#torchvision # already satisfied as 0.13.1 from base image
#numpy==1.19.2 # already satisfied as 1.21.5 from base image
numpy==1.21.5 # pin explicitly; the CUDA base image ships 1.21.5, but the ROCm image is not guaranteed to
# Stable Diffusion (see: https://github.com/CompVis/stable-diffusion)
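
To confirm which numpy version actually lands in the built image, one option (a sketch) is:
```
docker compose run --rm stable-diffusion python -c "import numpy; print(numpy.__version__)"
```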