catboxanon
3ce5fb8e5c
Add option for faster live interrupt
2023-08-17 20:03:26 -04:00
w-e-w
e1a29266b2
return empty list if extensions_dir not exist
2023-08-17 00:24:24 +09:00
w-e-w
0cf85b24df
auto add data-dir to gradio-allowed-path
2023-08-16 20:18:46 +09:00
AUTOMATIC1111
023a3a98a1
Merge pull request #12596 from AUTOMATIC1111/fix-taesd-scale
...
Remove wrong TAESD Latent scale
2023-08-16 09:56:12 +03:00
Kohaku-Blueleaf
d9ddc5d4cd
Remove wrong scale
2023-08-16 11:21:12 +08:00
AUTOMATIC1111
fd563e3274
Merge pull request #12586 from catboxanon/fix/rng-shape
...
RNG: Make all elements of shape `int`s
2023-08-15 21:47:02 +03:00
catboxanon
0f77139253
Fix inpaint upload for alpha masks, create reusable function
2023-08-15 14:24:55 -04:00
catboxanon
5b28b7dbc7
RNG: Make all elements of shape `int`s
2023-08-15 13:38:37 -04:00
AUTOMATIC1111
f01682ee01
store patches for Lora in a specialized module
2023-08-15 19:23:40 +03:00
AUTOMATIC1111
7327be97aa
Merge pull request #12570 from NoCrypt/add-miku-theme
...
Add NoCrypt/miku gradio theme
2023-08-15 16:31:12 +03:00
brkirch
54209c1639
Use the new SD VAE override setting
2023-08-15 06:29:39 -04:00
AUTOMATIC1111
bc61ad9ec8
Merge pull request #12564 from catboxanon/feat/img2img-noise
...
Add extra noise param for img2img operations
2023-08-15 09:50:20 +03:00
NoCrypt
b0a6d61d73
Add NoCrypt/miku gradio theme
2023-08-15 13:22:44 +07:00
catboxanon
371b24b17c
Add extra img2img noise
2023-08-15 02:19:19 -04:00
AUTOMATIC1111
79d4e81984
fix processing error that happens if batch_size is not a multiple of how many prompts/negative prompts there are #12509
2023-08-15 08:46:17 +03:00
AUTOMATIC1111
7e77a38cbc
get XYZ plot to work with recent changes to refiner specified in fields of p rather than in settings
2023-08-15 08:27:50 +03:00
AUTOMATIC1111
6f86573247
Merge pull request #12552 from brkirch/update-sdxl-commit-hash
...
Update SD XL commit hash
2023-08-15 08:12:21 +03:00
AUTOMATIC1111
45be87afc6
correctly add Eta DDIM to infotext when it's 1.0 and do not add it when it's 0.0.
2023-08-14 21:48:05 +03:00
AUTOMATIC1111
f23e5ce2da
revert changed inpainting mask conditioning calculation after #12311
2023-08-14 17:59:03 +03:00
AUTOMATIC1111
e56b7c8419
Merge pull request #12547 from whitebell/fix-typo
...
Fix typo in shared_options.py
2023-08-14 13:36:10 +03:00
brkirch
bc63339df3
Update hash for SD XL Repo
2023-08-14 06:26:36 -04:00
AUTOMATIC1111
6bfd4dfecf
add second_order to samplers that mistakenly didn't have it
2023-08-14 12:07:38 +03:00
AUTOMATIC1111
353c876172
fix API always using -1 as seed
2023-08-14 10:43:18 +03:00
AUTOMATIC1111
f3b96d4998
return seed controls UI to how it was before
2023-08-14 10:22:52 +03:00
AUTOMATIC1111
abbecb3e73
further repair the /docs page to not break styles with the attempted fix
2023-08-14 10:15:10 +03:00
whitebell
b39d9364d8
Fix typo in shared_options.py
...
unperdictable -> unpredictable
2023-08-14 15:58:38 +09:00
AUTOMATIC1111
c7c16f805c
repair /docs page
2023-08-14 09:49:51 +03:00
AUTOMATIC1111
f37cc5f5e1
Merge pull request #12542 from AUTOMATIC1111/res-sampler
...
Add RES sampler and reorder the sampler list
2023-08-14 09:02:10 +03:00
AUTOMATIC1111
c1a31ec9f7
revert to applying mask before denoising for k-diffusion, like it was before
2023-08-14 08:59:15 +03:00
Kohaku-Blueleaf
aa26f8eb40
Put frequently used sampler back
2023-08-14 13:50:53 +08:00
AUTOMATIC1111
cda2f0a162
make on_before_component/on_after_component possible earlier
2023-08-14 08:49:39 +03:00
AUTOMATIC1111
aeb76ef174
repair DDIM/PLMS/UniPC batches
2023-08-14 08:49:02 +03:00
Kohaku-Blueleaf
0ea61a74be
add res(dpmdd 2m sde heun) and reorder the sampler list
2023-08-14 11:46:36 +08:00
AUTOMATIC1111
007ecfbb29
also use setup callback for the refiner instead of before_process
2023-08-13 21:01:13 +03:00
AUTOMATIC1111
9cd0475c08
Merge pull request #12526 from brkirch/mps-adjust-sub-quad
...
Fixes for `git checkout`, MPS/macOS fixes and optimizations
2023-08-13 20:28:49 +03:00
AUTOMATIC1111
8452708560
Merge pull request #12530 from eltociear/eltociear-patch-1
...
Fix typo in launch_utils.py
2023-08-13 20:27:17 +03:00
AUTOMATIC1111
16781ba09a
fix 2 for git code botched by previous PRs
2023-08-13 20:15:20 +03:00
Ikko Eltociear Ashimine
09ff5b5416
Fix typo in launch_utils.py
...
existance -> existence
2023-08-14 01:03:49 +09:00
AUTOMATIC1111
f093c9d39d
fix broken XYZ plot seeds
...
add new callback for scripts to be used before processing
2023-08-13 17:31:10 +03:00
brkirch
2035cbbd5d
Fix DDIM and PLMS samplers on MPS
2023-08-13 10:07:52 -04:00
brkirch
5df535b7c2
Remove duplicate code for torchsde randn
2023-08-13 10:07:52 -04:00
brkirch
f4dbb0c820
Change the repositories origin URLs when necessary
2023-08-13 10:07:52 -04:00
brkirch
9058620cec
`git checkout` with commit hash
2023-08-13 10:07:14 -04:00
brkirch
2489252099
`torch.empty` can create issues; use `torch.zeros`
...
For MPS, using a tensor created with `torch.empty()` can cause `torch.baddbmm()` to include NaNs in the tensor it returns, even though `beta=0`. However, with a tensor of shape [1,1,1], there should be a negligible performance difference between `torch.empty()` and `torch.zeros()` anyway, so it's better to just use `torch.zeros()` for this and avoid unnecessarily creating issues.
2023-08-13 10:06:25 -04:00
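The commit body above describes why the attention-score computation should avoid `torch.empty()` on MPS. A minimal sketch of that pattern, assuming a hypothetical helper (the function name and signature are illustrative, not the actual webui code):

```python
import torch

def scaled_attention_scores(query, key, scale):
    # Hypothetical sketch of the pattern described above, not the exact webui code.
    # With beta=0 the input tensor's values should be ignored, but on MPS a tensor
    # from torch.empty() can still leak NaNs into the baddbmm() result, so a zeroed
    # [1, 1, 1] tensor is used instead; at this size the cost difference is negligible.
    bias = torch.zeros(1, 1, 1, device=query.device, dtype=query.dtype)
    return torch.baddbmm(bias, query, key.transpose(-1, -2), beta=0, alpha=scale)
```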
brkirch
87dd685224
Make sub-quadratic the default for MPS
2023-08-13 10:06:25 -04:00
brkirch
abfa4ad8bc
Use fixed size for sub-quadratic chunking on MPS
...
Even if this causes chunks to be much smaller, performance isn't significantly impacted. This will usually reduce memory usage but should also help with poor performance when free memory is low.
2023-08-13 10:06:25 -04:00
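The reasoning above concerns how the key/value chunk size for sub-quadratic attention is chosen. A rough sketch of the idea, with hypothetical names and an assumed fixed value (not the actual webui implementation):

```python
import torch

MPS_KV_CHUNK_SIZE = 512  # assumed fixed chunk size for MPS, purely illustrative

def choose_kv_chunk_size(k_tokens: int, bytes_per_token: int, free_memory: int) -> int:
    if torch.backends.mps.is_available():
        # A fixed, fairly small chunk keeps memory use predictable on MPS and avoids
        # the slowdowns seen when the size is derived from (low) free memory.
        return min(MPS_KV_CHUNK_SIZE, k_tokens)
    # Elsewhere, size chunks to roughly fit the memory currently available.
    return max(1, min(k_tokens, free_memory // max(1, bytes_per_token)))
```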
AUTOMATIC1111
3163d1269a
fix for the broken run_git calls
2023-08-13 16:51:21 +03:00
AUTOMATIC1111
1c6ca09992
Merge pull request #12510 from catboxanon/feat/extnet/hashes
...
Support search and display of hashes for all extra network items
2023-08-13 16:46:32 +03:00
AUTOMATIC1111
d73db17ee3
Merge pull request #12515 from catboxanon/fix/gc1
...
Clear sampler and garbage collect before decoding images to reduce VRAM
2023-08-13 16:45:38 +03:00
AUTOMATIC1111
127ab9114f
Merge pull request #12514 from catboxanon/feat/batch-encode
...
Encode batch items individually to significantly reduce VRAM
2023-08-13 16:41:07 +03:00