We are hosting a competition where the community can showcase their most inventive use of textual inversion concepts in text-to-image or text-to-video.
Our compute cluster, `Nataili`, currently comprises 3 nodes: two with a single 3090 each, and one with 2 x A5000.
We estimate `Nataili` can handle 12 concepts per hour, and we can add more workers if there is high demand.
We hope demand will be high: we want to train **hundreds** of new concepts!
For this event the theme is “The Sims: Stable Diffusion edition”.
So we have selected a subset of 1000 products from the Amazon Berkeley Objects dataset.
You can browse them here (link will be here).
Each product has images from multiple angles, and the train concept command accepts up to 10 images, so choose your angles, modify the backgrounds, experiment!
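If you want to prep your chosen angles before submitting, here is a minimal sketch (assuming Pillow is installed; the folder names are hypothetical) that centre-crops up to 10 images to 512x512:

```python
from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("abo_couch_raw")      # hypothetical folder with the downloaded product angles
DST = Path("abo_couch_prepped")  # hypothetical output folder
DST.mkdir(exist_ok=True)

# The train concept command accepts up to 10 images, so take at most 10.
for i, img_path in enumerate(sorted(SRC.glob("*.jpg"))[:10]):
    img = Image.open(img_path).convert("RGB")
    img = ImageOps.fit(img, (512, 512))  # centre-crop and resize to 512x512
    img.save(DST / f"{i:02d}.png")
```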
The goal of this category is to generate an image using the trained object, and the other categories still apply; your imagination is the only limit! Style a couch, try to make a BIG couch, put a couch on top of a mountain, make a vaporwave couch, anything!
The Discord bot will give you a link to a `.zip` file. Download it, extract it, and put the folder in `stable-diffusion-webui/models/custom/sd-concepts-library`.
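If you prefer to script that step, here is a minimal sketch (the URL is a placeholder for the link the bot gives you, and it assumes the `requests` package is installed) that downloads the archive and unpacks it into that folder:

```python
import io
import zipfile
from pathlib import Path

import requests

zip_url = "https://example.com/your-concept.zip"  # placeholder: use the link from the Discord bot
target = Path("stable-diffusion-webui/models/custom/sd-concepts-library")
target.mkdir(parents=True, exist_ok=True)

resp = requests.get(zip_url, timeout=60)
resp.raise_for_status()
with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
    zf.extractall(target)  # the extracted concept folder ends up under sd-concepts-library
```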