MrDeepFakes Forums

Errors with older AMD OpenCL 1.2 cards, any advice

FreakoNature

DF Vagrant
I had experienced a similar issue on a Radeon 5450 that I was using for testing purposes (a card that meets the minimum OpenCL and RAM requirements).

I have tested a second card, the R7 350X, which is really just a rebranded HD 8570 with 4 GB of VRAM. I would have thought it wouldn't have memory issues, but it seems it does as well.

Does anybody know what may be causing this?

Performing manual extract...
Running on Advanced Micro Devices, Inc. Oland (OpenCL).
Using plaidml.keras.backend backend.
INFO:plaidml:Opening device "opencl_amd_oland.0"
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]Traceback (most recent call last):
  File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\main.py", line 213, in <module>
    arguments.func(arguments)
  File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\main.py", line 35, in process_extract
    'multi_gpu' : arguments.multi_gpu,
  File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\mainscripts\Extractor.py", line 719, in main
    data = ExtractSubprocessor ([ ExtractSubprocessor.Data(filename) for filename in input_path_image_paths ], 'landmarks', image_size, face_type, debug_dir, cpu_only=cpu_only, manual=True, manual_window_size=manual_window_size).run()
  File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\joblib\SubprocessorBase.py", line 221, in run
    data = self.get_data(cli.host_dict)
  File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\mainscripts\Extractor.py", line 388, in get_data
    ], (1, 1, 1) )*255).astype(np.uint8)
  File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\imagelib\text.py", line 63, in get_draw_text_lines
    draw_text_lines ( image, rect, text_lines, color, border, font)
  File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\imagelib\text.py", line 59, in draw_text_lines
    draw_text (image, (l, i*h_per_line, r, (i+1)*h_per_line), text_lines, color, border, font)
  File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\imagelib\text.py", line 46, in draw_text
    image[t:b, l:r] += get_text_image (  (r-l,b-t,c) , text, color, border, font )
ValueError: operands could not be broadcast together with shapes (16,1367,3) (1367,16,3) (16,1367,3)

In that same machine I had also used an RX 460, which worked decently well other than crashing every 8-20 hours, though I can't be sure whether that was the fault of the card or the PC itself (a restored, very old Mac Pro 2008: 8 cores, 32 GB RAM).

And yes, I always blow the drivers away with DDU before attempting a card swap.
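
For what it's worth, my reading of the ValueError: the destination slice image[t:b, l:r] is (height, width, 3) = (16, 1367, 3), while get_text_image is handed a (r-l, b-t, c) size, so the overlay comes back as (width, height, 3) = (1367, 16, 3) and the += can't broadcast. A quick numpy sketch of what I think is going on (just an illustration, not whatever the actual fix in the newer builds is):

import numpy as np

# Shapes taken from the traceback above: the destination slice is
# (height, width, channels) but the text overlay is built from a
# (width, height, channels) size tuple, so the in-place add fails.
h, w, c = 16, 1367, 3
dest = np.zeros((h, w, c), dtype=np.float32)     # image[t:b, l:r]
overlay = np.zeros((w, h, c), dtype=np.float32)  # get_text_image((r-l, b-t, c), ...)

try:
    dest += overlay                              # raises the ValueError above
except ValueError as e:
    print(e)

dest += overlay.transpose(1, 0, 2)               # swapping the first two axes lines it up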
 

FreakoNature

DF Vagrant
iperov said:
What version are you using?

Download the latest version from MEGA.



I was using the April 5th build of the OpenCL version on this card, but the similar error I saw on the 5450 was with a March release.

Thanks for the quick reply!

I have my main gaming PC with an R9 290 doing iterations perfectly fine, so the software has worked for me on other machines. I also did some, uh... testing on some Intel HD 620/520 machines I was setting up for users at work (deleted after testing; I was only learning the program at the time). The Intel HDs mostly worked fine, though the ones slower than the 620 seemed too weak to handle manual face setting easily.
 

FreakoNature

DF Vagrant
Will try.


iperov said:
Use the 7 April build.

Wow, that worked. Thanks very much. Looks like I got here at just the right time.

Edit: Shit, never mind. It can extract faces now, but it fails during training.

Running trainer.

Loading model...

Model first run. Enter model options as default for each run.
Write preview history? (y/n ?:help skip:n) :
n
Target iteration (skip:unlimited/default) :
0
Batch_size (?:help skip:0) :
0
Feed faces to network sorted by yaw? (y/n ?:help skip:n) :
n
Flip faces randomly? (y/n ?:help skip:y) :
y
Src face scale modifier % ( -30...30, ?:help skip:0) :
0
Use lightweight autoencoder? (y/n, ?:help skip:n) :
n
Use pixel loss? (y/n, ?:help skip: n/default ) :
n
Using plaidml.keras.backend backend.
INFO:plaidml:Opening device "opencl_amd_oland.0"
Loading: 100%|########################################################################| 86/86 [00:00<00:00, 335.10it/s]
Loading: 100%|########################################################################| 29/29 [00:00<00:00, 358.05it/s]
===== Model summary =====
== Model name: H128
==
== Current iteration: 0
==
== Model options:
== |== batch_size : 4
== |== sort_by_yaw : False
== |== random_flip : True
== |== lighter_ae : False
== |== pixel_loss : False
== Running on:
== |== [0 : Advanced Micro Devices, Inc. Oland (OpenCL)]
=========================
Starting. Press "Enter" to stop training and save model.
INFO:plaidml:Analyzing Ops: 77 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 224 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 236 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 324 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 328 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 340 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 374 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 379 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 430 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 559 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 581 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 593 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 634 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 684 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 696 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 730 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 735 of 1563 operations complete
INFO:plaidml:Analyzing Ops: 979 of 1563 operations complete
ERROR:plaidml:Unable to allocate device-local memory: CL_MEM_OBJECT_ALLOCATION_FAILURE
Error: Unable to allocate device-local memory: CL_MEM_OBJECT_ALLOCATION_FAILURE
Traceback (most recent call last):
File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\mainscripts\Trainer.py", line 93, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\models\ModelBase.py", line 362, in train_one_iter
losses = self.onTrainOneIter(sample, self.generator_list)
File "C:\DeepFaceLabOpenCLSSE\_internal\DeepFaceLab\models\Model_H128\Model.py", line 84, in onTrainOneIter
total, loss_src_bgr, loss_src_mask, loss_dst_bgr, loss_dst_mask = self.ae.train_on_batch( [warped_src, target_src_mask, warped_dst, target_dst_mask], [target_src, target_src_mask, target_dst, target_dst_mask] )
File "C:\DeepFaceLabOpenCLSSE\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "C:\DeepFaceLabOpenCLSSE\_internal\python-3.6.8\lib\site-packages\plaidml\keras\backend.py", line 176, in __call__
self._invoker.invoke()
File "C:\DeepFaceLabOpenCLSSE\_internal\python-3.6.8\lib\site-packages\plaidml\__init__.py", line 1441, in invoke
return Invocation(self._ctx, self)
File "C:\DeepFaceLabOpenCLSSE\_internal\python-3.6.8\lib\site-packages\plaidml\__init__.py", line 1450, in __init__
self._as_parameter_ = _lib().plaidml_schedule_invocation(ctx, invoker)
File "C:\DeepFaceLabOpenCLSSE\_internal\python-3.6.8\lib\site-packages\plaidml\__init__.py", line 765, in _check_err
self.raise_last_status()
File "C:\DeepFaceLabOpenCLSSE\_internal\python-3.6.8\lib\site-packages\plaidml\library.py", line 131, in raise_last_status
raise self.last_status()
plaidml.exceptions.Unknown: Unable to allocate device-local memory: CL_MEM_OBJECT_ALLOCATION_FAILURE
Done.
Press any key to continue . . .
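
In case it helps anyone else reading: CL_MEM_OBJECT_ALLOCATION_FAILURE doesn't necessarily mean all 4 GB are in use. On a lot of OpenCL 1.2 drivers the largest single buffer (CL_DEVICE_MAX_MEM_ALLOC_SIZE) is only about a quarter of the reported VRAM, so one big tensor can fail even with memory free. If you have pyopencl installed (it is not part of the DFL package, so this is just a sketch) you can check what the driver actually allows:

import pyopencl as cl

# Print each OpenCL device's total memory vs. the largest single buffer the
# driver will hand out. If max_mem_alloc_size is ~1 GB on a 4 GB card, a
# single big allocation during training can still fail.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(dev.name)
        print("  global mem     : %d MB" % (dev.global_mem_size // (1024 * 1024)))
        print("  max single buf : %d MB" % (dev.max_mem_alloc_size // (1024 * 1024)))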
 

FreakoNature

DF Vagrant
This doesn't really seem to have an answer, unfortunately. The card has 4 GB of VRAM, but there's no real explanation for why it does this. I also just tested with an AMD 340X, which is basically the exact same card but with only 2 GB of VRAM.
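
If anyone wants to rule out the card/driver before blaming DFL itself, a tiny PlaidML-Keras smoke test (run with DFL's bundled Python, which already has plaidml-keras) should at least confirm the device can compile and train something small:

# Minimal PlaidML smoke test: if this trains a tiny dense net on the OpenCL
# device without an allocation error, the driver itself is probably OK and
# the failure above is more about the size of the H128 model.
import numpy as np
import plaidml.keras
plaidml.keras.install_backend()

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(32, activation="relu", input_shape=(16,)), Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 16).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, batch_size=32, epochs=1)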
 