fix endless sleep with optimized and batchsize (#384)

Currently the code loops endlessly in the while: time.sleep(1) part on the 2nd iteration of the for loop: after the first sample, modelFS is already on the CPU, so torch.cuda.memory_allocated() can never drop below the value snapshotted at the top of the block and the wait never finishes.
Moving the model back to the CPU should only be run once afterwards, not [batch_size] times (a standalone sketch of the moved block follows the diff below).

Co-authored-by: hlky <106811348+hlky@users.noreply.github.com>
willlllllio 2022-08-31 20:37:27 +02:00 committed by GitHub
parent c00233220e
commit fe746ce7c1


@@ -901,11 +901,11 @@ skip_grid, sort_samples, sampler_name, ddim_eta, n_iter, batch_size, i, denoisin
                     if simple_templating:
                         grid_captions.append( captions[i] )
-                if opt.optimized:
-                    mem = torch.cuda.memory_allocated()/1e6
-                    modelFS.to("cpu")
-                    while(torch.cuda.memory_allocated()/1e6 >= mem):
-                        time.sleep(1)
+        if opt.optimized:
+            mem = torch.cuda.memory_allocated()/1e6
+            modelFS.to("cpu")
+            while(torch.cuda.memory_allocated()/1e6 >= mem):
+                time.sleep(1)
         if (prompt_matrix or not skip_grid) and not do_not_save_grid:
             if prompt_matrix:
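
To make the intended ordering explicit outside the diff, here is a minimal, self-contained sketch of the offload-and-wait pattern, run once after the per-sample loop instead of inside it. The helper name offload_and_wait, the stand-in model, and the dummy loop are illustrative assumptions, not names from the repository; only opt.optimized, modelFS, and the memory-polling logic correspond to webui.py.

import time

import torch
import torch.nn as nn


def offload_and_wait(model: nn.Module) -> None:
    # Mirrors the moved block: snapshot allocated CUDA memory, move the
    # model to the CPU, then poll until the snapshot is undercut, i.e.
    # the GPU copy of the weights has actually been freed.
    mem = torch.cuda.memory_allocated() / 1e6
    model.to("cpu")
    while torch.cuda.memory_allocated() / 1e6 >= mem:
        time.sleep(1)


if torch.cuda.is_available():
    model_fs = nn.Linear(1024, 1024).cuda()  # stand-in for modelFS
    batch_size = 2                           # stand-in for the real batch size

    for i in range(batch_size):
        pass  # decode / save sample i here

    # Correct placement: offload once after the loop. Calling the helper
    # inside the loop hangs on the second pass, because the model is
    # already on the CPU, .to("cpu") frees nothing, and memory_allocated()
    # can never drop below the fresh `mem` snapshot.
    offload_and_wait(model_fs)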