added cloudflared tunnel.

aedh carrick 2023-09-12 10:29:03 -05:00
parent 5432b73fb8
commit 70e6d12451


@@ -49,7 +49,7 @@
 "\n",
 "## Installation instructions for:\n",
 "\n",
-"- **[Windows](https://sygil-dev.github.io/sygil-webui/docs/1.windows-installation.html)** \n",
+"- **[Windows](https://sygil-dev.github.io/sygil-webui/docs/1.windows-installation.html)**\n",
 "- **[Linux](https://sygil-dev.github.io/sygil-webui/docs/2.linux-installation.html)**\n",
 "\n",
 "### Want to ask a question or request a feature?\n",
@@ -172,7 +172,7 @@
 "\n",
 "If you want to use GFPGAN to improve generated faces, you need to install it separately.\n",
 "Download [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it\n",
-"into the `/sygil-webui/models/gfpgan` directory. \n",
+"into the `/sygil-webui/models/gfpgan` directory.\n",
 "\n",
 "### RealESRGAN\n",
 "\n",
@@ -182,7 +182,7 @@
 "There is also a separate tab for using RealESRGAN on any picture.\n",
 "\n",
 "Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).\n",
-"Put them into the `sygil-webui/models/realesrgan` directory. \n",
+"Put them into the `sygil-webui/models/realesrgan` directory.\n",
 "\n",
 "\n",
 "\n",
@@ -219,8 +219,8 @@
 "\n",
 "[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion\n",
 "model.\n",
-"Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. \n",
-"Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487), \n",
+"Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.\n",
+"Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487),\n",
 "this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.\n",
 "With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.\n",
 "See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).\n",
@@ -229,26 +229,26 @@
 "\n",
 "Stable Diffusion v1 refers to a specific configuration of the model\n",
 "architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet\n",
-"and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and \n",
+"and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and\n",
 "then finetuned on 512x512 images.\n",
 "\n",
 "*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present\n",
-"in its training data. \n",
+"in its training data.\n",
 "Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion).\n",
 "\n",
 "## Comments\n",
 "\n",
 "- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion)\n",
-" and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch). \n",
+" and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch).\n",
 " Thanks for open-sourcing!\n",
 "\n",
-"- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories). \n",
+"- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories).\n",
 "\n",
 "## BibTeX\n",
 "\n",
 "```\n",
 "@misc{rombach2021highresolution,\n",
-" title={High-Resolution Image Synthesis with Latent Diffusion Models}, \n",
+" title={High-Resolution Image Synthesis with Latent Diffusion Models},\n",
 " author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},\n",
 " year={2021},\n",
 " eprint={2112.10752},\n",
@@ -502,7 +502,7 @@
 "    file_url = file_info[file]['download_link']\n",
 "    if 'save_location' in file_info[file]:\n",
 "      file_path = file_info[file]['save_location']\n",
-"    else: \n",
+"    else:\n",
 "      file_path = models[model]['save_location']\n",
 "    download_file(file_name, file_path, file_url)\n",
 "\n",
@@ -580,6 +580,54 @@
 },
 "execution_count": null,
 "outputs": []
+},
+{
+"cell_type": "markdown",
+"source": [
+"Run Streamlit through Cloudflare."
+],
+"metadata": {
+"id": "QhazvrFG97zX"
+}
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "jjjjjjjjjjjjjj"
+},
+"outputs": [],
+"source": [
"#@title Run Streamlit through cloudflare.\n",
"!wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb\n",
"!dpkg -i cloudflared-linux-amd64.deb\n",
"\n",
"import subprocess\n",
"import threading\n",
"import time\n",
"import socket\n",
"import urllib.request\n",
"\n",
"def iframe_thread(port):\n",
" while True:\n",
" time.sleep(0.5)\n",
" sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
" result = sock.connect_ex(('127.0.0.1', port))\n",
" if result == 0:\n",
" break\n",
" sock.close()\n",
"\n",
" p = subprocess.Popen([\"cloudflared\", \"tunnel\", \"--url\", \"http://127.0.0.1:{}\".format(port)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n",
" for line in p.stderr:\n",
" l = line.decode()\n",
" if \"trycloudflare.com \" in l:\n",
" print(\"This is the URL to access Sygil WebUI:\", l[l.find(\"http\"):], end='')\n",
"\n",
"\n",
"threading.Thread(target=iframe_thread, daemon=True, args=(8501)).start()\n",
"\n",
"!streamlit run scripts/webui_streamlit.py --theme.base dark --server.headless true"
]
 }
 ]
 }
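
For reference, the pattern the new cell relies on boils down to the sketch below: wait until something is listening on Streamlit's port, start a `cloudflared` quick tunnel pointed at it, and scrape the public `*.trycloudflare.com` URL from cloudflared's stderr. This is a minimal sketch rather than the notebook's exact code: it assumes `cloudflared` is already installed and on `PATH`, that Streamlit uses its default port 8501, and the helper names `wait_for_port` and `announce_tunnel` are illustrative.

```python
import re
import socket
import subprocess
import threading
import time

def wait_for_port(port, host="127.0.0.1"):
    # Poll until something accepts connections on host:port.
    while True:
        time.sleep(0.5)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = sock.connect_ex((host, port))
        sock.close()
        if result == 0:
            return

def announce_tunnel(port):
    # Quick tunnels print their *.trycloudflare.com URL on stderr.
    wait_for_port(port)
    proc = subprocess.Popen(
        ["cloudflared", "tunnel", "--url", "http://127.0.0.1:{}".format(port)],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    for raw_line in proc.stderr:
        match = re.search(r"https://\S+\.trycloudflare\.com", raw_line.decode())
        if match:
            print("Tunnel URL:", match.group(0))
            break

# Note the trailing comma: Thread's args must be a tuple, (8501,) not (8501).
threading.Thread(target=announce_tunnel, daemon=True, args=(8501,)).start()
# ...then launch Streamlit in the foreground, e.g.
# subprocess.run(["streamlit", "run", "scripts/webui_streamlit.py",
#                 "--theme.base", "dark", "--server.headless", "true"])
```

Polling the port before announcing means the URL is only printed once Streamlit is actually serving; running the watcher as a daemon thread leaves the cell's foreground free for the Streamlit process itself.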