OpenFaceSwap DeepFakes Software

OpenFaceSwap: A DeepFakes GUI

What is OpenFaceSwap?

OpenFaceSwap is a free, open-source end-user package based on the faceswap community GitHub repository. OpenFaceSwap includes:

  • A portable WinPython environment with all necessary Python dependencies pre-installed.
  • A fully functional GUI that does more than simply echo python commands.
  • An exact copy of the most recent faceswap package.
  • Two additional custom faceswap packages designed for high or low performance.

[Screenshot: the OpenFaceSwap GUI]

How do I obtain OpenFaceSwap?

Download the BETA version here:

WARNING: Use at your own risk. This is a work in progress, and by downloading this package you agree that we are not responsible for any loss or damages you incur from OpenFaceSwap.

Download OpenFaceSwap version 0.9 (Mega.nz)

Current security hashes:

  • File size: 458,892,751 bytes
  • MD5: 05986B2E914307FD87646D82CFC56C9D
  • SHA1: F32EE526ADCDF7A241ECD61B74EA3EEBA763F6D2
  • SHA256: 413FA0648D9BD1BBAB5114086AB8342F60C8E00861B610C9E0D9B7E58A1E1B90

Previous hashes:

  • File size: 458,870,358 bytes
  • MD5: 0F8205DE6D7784B03C063D2B7FDC3B47
  • SHA1: 003C052804534F77151C851A9EC8D24D68382839
  • SHA256: D3856B021ED7946114BB2495A9161A070CF859E79F9B8B325B114D3A9C215491
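To check a downloaded installer against these hashes, you can use the certutil tool built into Windows (the filename below is a placeholder; substitute the name of the file you actually downloaded):

certutil -hashfile OpenFaceSwap.exe SHA256
certutil -hashfile OpenFaceSwap.exe MD5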

Requirements:

  1. Microsoft Visual Studio Redistributable 2015 (Link)
  2. CUDA 9.0, NOT 9.1 (Link)
  3. cuDNN 7.05, NOT 7.1 (Link, requires email registration)
  4. Latest NVIDIA graphics card drivers (Link)
  5. Windows 10 or Windows Server 2016; Intel processor second generation (Sandy Bridge) or later; NVIDIA graphics card with compute capability 3.0 or higher

For instructions on how to use OpenFaceSwap, skip ahead to the usage section below.

Why use OpenFaceSwap?

OpenFaceSwap offers a number of benefits over current deepfakes software packages.

Convenience

  • Simple installation of the GUI and the accompanying portable python environment. No downloading of additional files or local compiling necessary.
  • Light and fast package with an installer under 450 MB. No unpacking of large temporary files at each run, either.
  • A true GUI that can execute a complete deepfakes workflow with only mouse clicks.
  • Saving and loading of configurations as well as custom commands.
  • Shortcut icons for examining and editing output folders in Windows Explorer.
  • An audio syncing option for video creation.
  • Use and management of default folders for beginners who want to avoid details.
  • No obnoxious watermarks.

Flexibility

  • Option to override any step with arbitrary custom commands.
  • Able to view and edit all code for complete customization.
  • Three provided faceswap packages, and the ability to import and use additional faceswap packages.
  • A GUI shell that is compatible with other python installations or compiled binaries. Simply adjust the engine configuration file.

Performance

  • Multi-threading for more efficient GPU usage that provides 50-75% faster training (see benchmarks).
  • TensorFlow 1.7 with AVX compilation, CUDA 9.0, and cuDNN 7.0, which is 5-10% faster than TensorFlow 1.5 and 10-20% faster than TensorFlow 1.4.
  • An experimental faceswap package with loss balancing for up to 2x faster training of difficult data sets. Combined with the other features, this can provide up to 3x faster training than FakeApp.
  • Pre-compiled dlib 19.10.0 with full GPU support.
  • An experimental faceswap package including a dfaker plugin for 128×128 pixel resolution, large face area outputs (see benchmarks).
  • A low memory faceswap package to accommodate users of 2GB graphics cards.
  • All community plugins including GAN, GAN128, and IAE models.

Transparency

  • All source code is included in the installed package as human-readable Ruby or Python files. The dangers of closed source deepfakes software have been documented by Malwarebytes and Reddit.
  • The GUI shell may be run by installing Ruby Shoes and loading the .rb file.
  • The python source code may be run on any other python installation provided that the necessary dependencies are present.
  • The GUI and python backends are all released under the GNU GPL3 license.
  • Documented security hashes for verification of safe installation downloads.

How to use OpenFaceSwap?

Installation

Install the first four prerequisites listed above (item 5 describes the required hardware and operating system). For CUDA, choose a custom installation. Ignore any warnings from CUDA about the lack of a C++ compiler. Do not install the drivers through CUDA.

Installing cuDNN:

  • Unzip the cuDNN files.
  • You need to manually copy 3 files from cuDNN to the corresponding folders in your CUDA installation, which is usually at C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0
  • Copy cuda\bin\cudnn64_7.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin
  • Copy cuda\include\cudnn.h to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\include
  • Copy cuda\lib\x64\cudnn.lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\lib\x64
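If you prefer the command line, the same three copies can be done from an administrator Command Prompt opened in the unzipped cuDNN folder (paths assume the default CUDA 9.0 location shown above):

copy cuda\bin\cudnn64_7.dll "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin"
copy cuda\include\cudnn.h "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\include"
copy cuda\lib\x64\cudnn.lib "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\lib\x64"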

Download and run the .exe OpenFaceSwap Installer. Choose a directory for installation that is not in Program Files due to possible user permission errors. The default is set to “C:\OpenFaceSwap”.

Wait while the files are unpacked.

Double-click the desktop shortcut, or run OpenFaceSwap.exe from your installation directory. The very first time you run it, it may take several minutes for anything to happen. You can check Task Manager to make sure OpenFaceSwap.exe is present and active.

Beginner’s usage

For beginners or first-time users, you can simply press the buttons without worrying about the details. Later you may wish to explore further.

To get started, click the “Video A” button and select a video file. This will be the video with the face you want to change.

Click “Images A” to extract all of the frames from Video A. When done, hit any key at the prompt to continue. If you like, you can press the magnifying glass icon next to the directory name to inspect the results.

Click “Faces A” to extract all of the faces from Image Set A. Ideally, your video only has one face present. You may wish to inspect your results and remove any erroneous face extractions.

Repeat the above three steps with Video B, which is the face that you will insert. If you have a set of images instead of a video, you can instead skip ahead to the “Images B” text box. Click the folder icon to select the directory that has all of your images. Then, proceed to click the “Faces B” button as before.

Click “Model”. When you hover over the button, the necessary input folders will be highlighted. In this case, those are folders for Faces A and Faces B. Training your model will take many hours. You can wait until the printed loss values are less than 0.02. Also, check the previews for the quality of the faceswap. When ready, press the Enter key to stop training.

Click “Swaps” to apply your model to turn face A into face B. If you hover over the button, you will see that you need input folders for Faces A and Model. When this is finished, you may wish to inspect your results as before.

Finally, click “Movie” to generate the movie file. Your movie file will be named as shown in the text box and placed within your OpenFaceSwap installation directory. Click the magnifying glass to open the folder and play the movie file.
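The movie step is a standard ffmpeg call. A typical generated command looks roughly like the following (reconstructed from commands discussed in the comments below; the frame rate and file names will vary with your project):

ffmpeg -r 25 -f image2 -y -i ".\merge\videoA"%d.jpg -vcodec libx264 -crf 15 -pix_fmt yuv420p output.mp4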

When you are done, you may wish to click the Trash icon and empty your default folders. If you want to delete your model files, you can also do that by checking the appropriate box.

Advanced usage

Click the gear icon next to each command to see a number of options.

Not all command line options are available from the GUI. You can enter custom commands by checking the “Custom” box. You may wish to highlight and copy the original commands first and then edit them.

You can save and load all of your settings, including your custom commands, using the icons in the upper left corner.

The GUI shell runs using python backends or “engines”. The default engine in the installation is an exact copy of the most recent faceswap GitHub repository. To load the experimental or low memory faceswap packages, edit the openfaceswapconfig.txt file to point to the appropriate paths. This will normally only involve inserting a “_exp” or “_lowmem” in the appropriate paths.
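For example, switching the training step to the experimental engine only changes the path in the corresponding entry; the edited entry would look something like this (see the full default listing quoted in the comments below):

Base train command
python\scripts\python.bat faceswap_exp\faceswap.py train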

Note that you can mix and match extraction and conversion scripts from different packages in the engine configuration file, although there could be unforeseen compatibility issues.

Some notes on the engines:

  • DFaker only works in the experimental engine.
  • The Original model uses loss balancing in the experimental engine with a minimum training ratio of 1:4 (see the code).
  • The LowMem model in the low memory engine should work for 2GB graphics cards. The extraction uses face_recognition instead of face-alignment, so the results will be slightly different. This can be useful if you are having errors with one extraction module. Note that the alignments.json files in the experimental engine have a slightly different format.

The portable WinPython package is a complete and independent no-install platform. If you wish to use the Python package directly, run “WinPython Command Prompt.exe” from the “python” directory. This will set up the proper environment and let you use commands such as “python faceswap.py” from the CLI.
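For example, from that prompt in the OpenFaceSwap directory you could run the face extraction step by hand using the GUI’s default folders (the -i/-o options are the usual faceswap ones; check python faceswap\faceswap.py extract -h for the exact options in your copy):

python faceswap\faceswap.py extract -i imgA -o alignA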

 

Questions or requests

Report all bugs and ask questions in the forums. There are too many comments on this page, which interferes with loading the post. Most new comments will be deleted. Please use the forums if you can.

Known bugs or issues

The Original model is the most thoroughly tested model. Others, such as GAN and IAE, may give unexpected results or errors.

The Converter Adjust setting does not function properly. This is due to the underlying python code.

 

323 thoughts on “OpenFaceSwap DeepFakes Software”

  1. Got a GTX 970 with 4GB VRAM

    But the program OOMs on me; I have to use LowMem with batch 64 to get it to work. Am I missing something, or is my card too low-end to run the program in full?

    • 4GB is a bit low for the current implementation. See if you can use the Original model with batch = 16. It is better to use the Original model and reduce the batch size if that is possible.

  2. Face Extraction failed:

    Traceback (most recent call last):
    File “faceswap\faceswap.py”, line 8, in
    from lib.cli import FullHelpArgumentParser
    File “C:\OpenFaceSwap\faceswap\lib\cli.py”, line 7, in
    from lib.FaceFilter import FaceFilter
    File “C:\OpenFaceSwap\faceswap\lib\FaceFilter.py”, line 3, in
    import face_recognition
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py”, line 7, in
    from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py”, line 4, in
    import dlib
    ImportError: DLL load failed: The specified module could not be found.
    Press any key to continue . . .

    • Install the exact pre-requisites including cuDNN (answered your question below). You also need an AVX supporting CPU, which most people already have.

    • They go in the corresponding folders where CUDA is installed. You have to copy 3 files to 3 separate locations. I updated the post above with more detailed directions.

  3. The GUI is working well for me – many thanks for putting the effort into this. If there are ways the community at large (the non-devs) can support the effort, let us know. My results from GAN128 and dfaker have been worse than with Original; however, I think that’s probably more a function of lining up my source material – I can see the light at the end of the tunnel. You have made something easy that was hard, thank you very much.

    • You’re welcome, and thanks for your kind words. Right now, just feedback and spreading the word would be helpful. Try to count the number of images processed (batch size x iteration number). GAN128 and dfaker are much more resource intensive. If you have only processed 1 million images, it won’t look as good as the Original processing 10 million.

      I also haven’t tested GAN/GAN128 at all, as it seemed to be in progress. If you see any issues, feel free to bring them up.

  4. Have it up and running!
    I just had to put the files into the NVIDIA Toolkit installation directory.

    BUT: it seems my video card is unsupported:
    2018-04-06 05:34:33.742598: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1394] Ignoring visible gpu device (device: 0, name: Tesla M2070, pci bus id: 0000:07:00.0, compute capability: 2.0) with Cuda compute capability 2.0. The minimum required Cuda capability is 3.0.

    That sounds really bad. Is there a workaround? Please Help!

    • Unfortunately, CUDA compute 3.0 is required. Your card is too slow.

      Unless you get a new card or use a cloud service, there’s not much you can do right now. This package is not built for CPU-only use, and CPU-only is too slow to be practical.

    • It is extremely difficult. I’ve actually been trying to get it to work for a long time on Windows… fingers crossed, but unlikely.

  5. Hi,

    Thanks for this great tutorial, but I have the same problem as gierigeseele.

    Traceback (most recent call last):
    File “faceswap_lowmem\faceswap.py”, line 10, in
    from scripts.extract import ExtractTrainingData
    File “D:\OpenFaceSwap\faceswap_lowmem\scripts\extract.py”, line 7, in
    from lib.cli import DirectoryProcessor
    File “D:\OpenFaceSwap\faceswap_lowmem\lib\cli.py”, line 6, in
    from lib.FaceFilter import FaceFilter
    File “D:\OpenFaceSwap\faceswap_lowmem\lib\FaceFilter.py”, line 3, in
    import face_recognition
    File “D:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py”, line 7, in
    from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
    File “D:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py”, line 4, in
    import dlib
    ImportError: DLL load failed: The specified module could not be found.

    I installed all the prerequisites, but still get the same error. I downloaded cuDNN 7.05 for CUDA 9.0, not cuDNN 7.05 for CUDA 9.1. I think that was correct?

    Thanks anyway, good job and great tutorial.

    • Did you transfer the cuDNN files into the directories after you unzipped? That seemed to fix gierigeseele’s initial issue.

      Yes, cuDNN for CUDA 9.0 should be right.

  6. Had the same problem; actually, installing cuDNN as above worked, but only after I reinstalled OpenFaceSwap after installing cuDNN.
    Anyhow, now I have the next problem: when I align the faces, no faces are recognized/extracted/placed in the align folder, even though I believe I am using comparably simple images.

    • What is the output from the console when you run the face extraction? Does it say “no face detected” or something similar? Does it have a summary at the end with 0 faces?

      • Does it have a summary at the end with 0 faces?

        For my part I had this message with 3 different videos. It never detects faces 🙁

        • Double-check your CUDA/cuDNN installation. Also, are you using Windows 10, Intel CPU with AVX, NVIDIA graphics card with compute capability 3.0+?

  7. Hi, in requirements you say “Intel processor second generation (Sandy Bridge) or later”.
    Is it possible to use it on an i930 (first gen)?

  8. I think that in your Installing cuDNN guide, the line “Copy cuda\bin\cudnn64_7.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0” is missing the “\bin” at the end, leading to the errors above

  9. It completes the whole face creation process and at the end gives the number of detected pictures and 0 detected faces

    • Can you copy the exact command you input? (From the command box.)

      Is the number of detected pictures correct? For example, if you have 500 images in the folder, does it say 500 images detected?

      Check that you are using all one type of image, jpg or png, not mixing.

      Are there any other error messages or warnings you can see?

  10. So, as I understand it, if attempting to train GAN you want to train a base model first, and then objects that clip over the face should be handled once the base model is trained? Does this package use the release from 2018-03-17 – it seems like that one is designed to handle the stages. Where do I go to find out how to train GAN properly?

    Also, where do I go to find the difference between normal and IAE?

    PS: So far, this seems miles above using something like fakeapp, so thanks for the release. The normal models do seem to move faster.

    • I haven’t tested GAN. I think what you are referring to is the fact that the masking is added in later after some training occurs. This is all automatic, and you don’t do anything extra yourself (except wait longer).

      The package should include everything as of the start of April.

      My impression was that GAN was still a work in progress, but you can always check the github (either the main one, or the original author before the port).

      Original comments on IAE model:
      https://github.com/deepfakes/faceswap/pull/251

      A couple other threads on it, such as this:
      https://github.com/deepfakes/faceswap/issues/283

      Thanks for your feedback, appreciated.

  11. Seemed like it was working fine, but after restarting my computer for the first time since installing, when I try to open it, it gives me an error saying,
    “Runtime Error! This application has requested the Runtime to terminate it in an unusual way. Please contact the application’s support team for more information.” and won’t open.

    • I’ve gotten nothing but that error. Installed all the requirements except the specific version of “Microsoft Visual Studio Redistributable 2015” linked above. When I tried to install it, it refused, saying I already have a newer version installed. I don’t want to uninstall that, not knowing what other dependencies might be broken.

      Is the specific version linked above required?

      • I tried the bcdedit fix, and it had no effect. I still get the error. From what I’ve read, that boot configuration option would have no effect on a 64-bit OS, which I’m running (Win10), since it is not subject to the 4GB address space limitation, and hence does not need to partition kernel and application address space nearly as tightly.

        I also uninstalled all versions of MVSR 2015 and re-installed only the one linked above. No change.

        • Can’t reply further within the comments (tree limit, go to forum if you need further). Yeah, your system specs are fine. I am even jealous of them.

          This error seems to be all over the place for other programs, no good solutions online. When was the last time you had a Windows update? Or anything weird with user permissions?

          The other thing you can try is to install Ruby Shoes for Windows and try to run a very simple test program, like a hello world… see if that works.

  12. Getting an OOM error with a GTX 980 (4GB VRAM). I tried playing with the batch size, and it’s only working on the low-mem setting. Anything else I can try?

    • Responded in the forums.

      Also, some new python backends are being developed, although we’ll have to wait to see how it goes. Might help out.

  13. It would be very helpful to have just a bit more explanation of the options.

    For instance, what does “epochs” mean?

    An addendum to your main post with a short explanation of options would be very welcome.

  14. Awesome. Left the scene for a bit due to limitations. Got to say though, good breakthroughs. You’re the man, DFC.

  15. Got an 8th-gen Core i5 and a GTX 1050 with 4GB VRAM

    In Original mode, the program always says “ran out of memory trying to allocate 2.22GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.” I have to use LowMem with batch 64 to get it to work. Does that mean I am missing something, or is my card too low-end?

    • If the training runs fine, you can ignore the warning. Otherwise, the first thing to try is to reduce the batch size.

      It is better to use the Original model with batch size = 16 or even 8, if that works without memory errors.

      I think some people have issues with 4GB and the python scripts, but others can get it to work. Close any other programs and get rid of any fancy Windows animations, Aero, etc. settings.
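      If you are running from the command line instead of the GUI, you can also pass the batch size to the training script. The flags below are the usual faceswap ones from this era; verify them with python faceswap\faceswap.py train -h, since options change between versions. The folder names are the GUI defaults:

      python faceswap\faceswap.py train -A alignA -B alignB -m model -bs 16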

    • You need to replace it in 4 places, so something like:

      Base split A command
      ffmpeg
      Base align A command
      python\scripts\python.bat faceswap_lowmem\faceswap.py extract
      Base split B command
      ffmpeg
      Base align B command
      python\scripts\python.bat faceswap_lowmem\faceswap.py extract
      Base train command
      python\scripts\python.bat faceswap_lowmem\faceswap.py train
      Base merge command
      python\scripts\python.bat faceswap_lowmem\faceswap.py convert
      Base video command
      ffmpeg

  16. Can you help me please? This happens when I press the MODEL button:

    [[Node: loss_1/mul/_597 = _Recv[client_terminated=false, recv_device=”/job:localhost/replica:0/task:0/device:CPU:0″, send_device=”/job:localhost/replica:0/task:0/device:GPU:0″, send_device_incarnation=1, tensor_name=”edge_1685_loss_1/mul”, tensor_type=DT_FLOAT, _device=”/job:localhost/replica:0/task:0/device:CPU:0″]()]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    Caused by op ‘model_1/conv2d_5/convolution’, defined at:
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 884, in _bootstrap
    self._bootstrap_inner()
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 916, in _bootstrap_inner
    self.run()
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 864, in run
    self._target(*self._args, **self._kwargs)
    File “C:\OpenFaceSwap\faceswap\scripts\train.py”, line 147, in processThread
    model = PluginLoader.get_model(trainer)(get_folder(self.arguments.model_dir), self.arguments.gpus)
    File “C:\OpenFaceSwap\faceswap\plugins\Model_Original\AutoEncoder.py”, line 16, in __init__
    self.initModel()
    File “C:\OpenFaceSwap\faceswap\plugins\Model_Original\Model.py”, line 22, in initModel
    self.autoencoder_A = KerasModel(x, self.decoder_A(self.encoder(x)))
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\engine\topology.py”, line 603, in __call__
    output = self.call(inputs, **kwargs)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\engine\topology.py”, line 2061, in call
    output_tensors, _, _ = self.run_internal_graph(inputs, masks)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\engine\topology.py”, line 2212, in run_internal_graph
    output_tensors = _to_list(layer.call(computed_tensor, **kwargs))
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\layers\convolutional.py”, line 164, in call
    dilation_rate=self.dilation_rate)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py”, line 3195, in conv2d
    data_format=tf_data_format)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\tensorflow\python\ops\nn_ops.py”, line 782, in convolution
    return op(input, filter)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\tensorflow\python\ops\nn_ops.py”, line 870, in __call__
    return self.conv_op(inp, filter)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\tensorflow\python\ops\nn_ops.py”, line 522, in __call__
    return self.call(inp, filter)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\tensorflow\python\ops\nn_ops.py”, line 206, in __call__
    name=self.name)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py”, line 1039, in conv2d
    data_format=data_format, dilations=dilations, name=name)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\tensorflow\python\framework\op_def_library.py”, line 787, in _apply_op_helper
    op_def=op_def)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\tensorflow\python\framework\ops.py”, line 3290, in create_op
    op_def=op_def)
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\tensorflow\python\framework\ops.py”, line 1654, in __init__
    self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

    ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[64,2048,4,4] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    [[Node: model_1/conv2d_5/convolution = Conv2D[T=DT_FLOAT, data_format=”NCHW”, dilations=[1, 1, 1, 1], padding=”SAME”, strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device=”/job:localhost/replica:0/task:0/device:GPU:0″](model_1/conv2d_5/convolution-0-TransposeNHWCToNCHW-LayoutOptimizer, conv2d_5/kernel/read)]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    [[Node: loss_1/mul/_597 = _Recv[client_terminated=false, recv_device=”/job:localhost/replica:0/task:0/device:CPU:0″, send_device=”/job:localhost/replica:0/task:0/device:GPU:0″, send_device_incarnation=1, tensor_name=”edge_1685_loss_1/mul”, tensor_type=DT_FLOAT, _device=”/job:localhost/replica:0/task:0/device:CPU:0″]()]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  17. Images detected are correct.

    No error warnings at all

    As copying does not work in cmd, I’ll just type the important part here:

    Loading Extract from Extract_Align plugin…
    100%
    Alignments filepath : [path image dir\alignments.json]
    Writing alignments to: [as above]
    ——–
    Images found: 2285
    Faces detected: 0
    ——–
    Done!
    hit any key

    GPU is 950GTX
    CPU I5-4590

    • Can you copy and paste the text in the “Command” text box? What command is being sent?

      Also, do you have the same problem with different image sets?

  18. Loading Model from Model_Original plugin…
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1264: calling reduce_prod (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    Failed loading existing training data.
    Unable to open file (unable to open file: name = ‘C:\OpenFaceSwap\model\encoder.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)
    Loading Trainer from Model_Original plugin…
    Starting. Press “Enter” to stop training and save model
    Exception in thread Thread-2:
    Traceback (most recent call last):
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 916, in _bootstrap_inner
    self.run()
    File “C:\OpenFaceSwap\faceswap\lib\utils.py”, line 64, in run
    for item in self.generator:
    File “C:\OpenFaceSwap\faceswap\lib\training_data.py”, line 23, in minibatch
    assert length >= batchsize, “Number of images is lower than batch-size (Note that too few images may lead to bad training). # images: {}, batch-size: {}”.format(length, batchsize)
    AssertionError: Number of images is lower than batch-size (Note that too few images may lead to bad training). # images: 0, batch-size: 64

    • Your training data size is smaller than your batch size.

      Either reduce your batch size or increase the number of images in your training data (make sure to run the face extract again).

  19. I keep getting this…any suggestions?

    Traceback (most recent call last):
    File “faceswap\faceswap.py”, line 8, in
    from lib.cli import FullHelpArgumentParser
    File “C:\OpenFaceSwap\faceswap\lib\cli.py”, line 7, in
    from lib.FaceFilter import FaceFilter
    File “C:\OpenFaceSwap\faceswap\lib\FaceFilter.py”, line 3, in
    import face_recognition
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py”, line 7, in
    from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py”, line 4, in
    import dlib
    ImportError: DLL load failed: The specified module could not be found.
    Press any key to continue . . .

    • The most common cause so far is that cuDNN/CUDA are not installed properly.

      Make sure you use the exact version (CUDA 9.0, not 9.1, and cuDNN 7.05 for CUDA 9.0, NOT cuDNN 7.1, and NOT cuDNN 7.05 for CUDA 9.1).

      Also, to install cuDNN, you need to copy three files to three locations as described above.

      Double-check you don’t have any other CUDA installs interfering.

      Finally, make sure you have the latest NVIDIA graphics drivers installed. Do NOT install drivers during the CUDA install. Use the separate NVIDIA graphics driver installer.
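      A quick way to confirm which CUDA toolkit is actually active is nvcc, which installs with the toolkit (assuming the CUDA bin folder is on your PATH); the output should report release 9.0:

      nvcc --version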

  20. Getting zero faces detected

    C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    Input Directory: C:\OpenFaceSwap\imgA
    Output Directory: C:\OpenFaceSwap\alignA
    Filter: filter.jpg
    Using json serializer
    Starting, this may take a while…
    Loading Extract from Extract_Align plugin…
    100%|████████████████████████████████████████████████████████████████████████████████| 353/353 [00:49<00:00, 7.16it/s]
    Alignments filepath: C:\OpenFaceSwap\imgA\alignments.json
    Writing alignments to: C:\OpenFaceSwap\imgA\alignments.json
    ————————-
    Images found: 353
    Faces detected: 0
    ————————-

    • Can you post in the forums? Another user trying to install python manually from scratch has the same issue. We are trying to figure out together what is going on in the “0 faces” cases.

      It doesn’t sound like it is a package issue… something with the environment, like Windows 7, CPU type, etc.

  21. Congratulations, it’s fantastic to have everything gathered in the same application; really good job. Original, LowMem, GAN, GAN128 and IAE all work for me, but I get an error message for DFaker. Do you have an idea of what’s wrong?
    (Windows 10 / 16GB RAM / i7 3930K / GTX 1080 Ti)

    Loading Trainer from Model_DFaker plugin…
    0%| | 0/1008 [00:00<?, ?it/s]Exception in thread Thread-1:
    Traceback (most recent call last):
    File "C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
    File "C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
    File "C:\OpenFaceSwap\faceswap_exp\scripts\train.py", line 181, in processThread
    raise e
    File "C:\OpenFaceSwap\faceswap_exp\scripts\train.py", line 153, in processThread
    trainer = trainer(model, images_A, images_B, self.arguments.batch_size, self.arguments.perceptual_loss)
    File "C:\OpenFaceSwap\faceswap_exp\plugins\Model_DFaker\Trainer.py", line 68, in __init__
    images_A, landmarks_A = load_images_aligned(fn_A[:minImages])
    File "C:\OpenFaceSwap\faceswap_exp\plugins\Model_DFaker\utils.py", line 29, in load_images_aligned
    if os.path.exists(cropped):
    File "C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\genericpath.py", line 19, in exists
    os.stat(path)
    TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

    • Did you change the openfaceswapconfig.txt file to point to the “_exp” version of the directory?

      You need to re-run all the face extractions after doing the above.

  22. Regarding the 0 faces problem: sorry, I can’t copy the command output, but it looks the same as what Tofu posted above.

    Operating system is Windows 10.
    Same problem on any type of picture set I have tested yet.

    By the way, the installation manual for cuDNN says to integrate it into a Visual Studio project, but I guess this is not applicable if you are just using the runtime environment. Or?
    Regarding posting in the forum: where in the forum?

  23. I have set the batch size as low as ‘1’ using ‘original’ and it is still giving me OOM errors.
    see : https://pastebin.com/hByNRG3x
    My PC specifications :
    Operating System
    Windows 10 Pro 64-bit
    CPU
    Intel Core i7 2600 @ 3.40GHz
    Sandy Bridge 32nm Technology
    RAM
    16.0GB Dual-Channel DDR3 @ 532MHz (7-7-7-20)
    Motherboard
    ASRock P65iCafe (CPUSocket)
    Graphics
    S22B300 (1600×…)
    3071MB NVIDIA GeForce GTX 1060 3GB (ZOTAC International)
    Storage
    223GB KINGSTON SUV400S37240G ATA Device (SSD) 27 °C
    465GB Western Digital WDC WD5000AZRX-00L4HB0 (SATA)

    Did I forget to do a step? I also installed VS 2017 for CUDA.

    • Original may not work with 3GB, regardless of how small you make the batch size. Try the LowMem model.

      If that doesn’t work, you may need to use the “lowmem” engine (add _lowmem in 4 places to the openfaceswapconfig.txt file). From there, try the LowMem model, which uses even lower memory.

  24. Do you know if there’s any way to use the faceswap script but with the higher face area of dfaker? I’ve been using the python scripts for both, and I really like the added detail of dfaker, especially since it can even slightly change the face shape of the person you are replacing. I also like the built-in merge function, but it would be great if it also had faceswap’s blur implementation, as dfaker does often leave a subtle line at the edge of the mask. Still, while the detail is improved with dfaker, faceswap handles more difficult angles and face obstructions better, works significantly faster, and has blur settings. If faceswap could have the larger face area of dfaker, that would be ideal, since the slight face shape morphing really is the best thing about it. I don’t know if it is possible to do this; if you have any tips, please let me know.

    • In principle, you should be able to do that, although I’m not sure which parts of the code you have to alter. There is definitely one part of the code where it strips out the central portion of the 256×256. I think there is a padding of maybe 48 pixels on each side. Then, it scales to 64×64. You can see the size of the matrix in the code. I’m less sure what needs to be done on the conversion/merge side.

      • True, both would be needed. I doubt dfaker merge would be compatible with faceswap either. I think I previously tried doing the opposite, trying to use faceswap merge on a dfaker model and clearly it didn’t work. Hopefully someone can come up with a better model at some point.

  25. Protip: If you are getting out of memory errors when attempting to swap, check to be sure you haven’t accidentally deleted the alignments file in your images folder; you need that one in addition to the one in the faces folder, or else it probably won’t work.

  26. Hey, can somebody explain what all the different settings mean? I have a basic understanding of batch and layers, but what are GAN and GAN128 and the other options? Thanks. PS: I have a Quadro P4000 and a 6700K CPU.

    • GAN/GAN128 are experimental models (works in progress, possibly) that are very expensive computationally. The point is that they are supposed to be able to create more realistic swaps, including dealing with face occlusions (like hands in front of the face).

  27. I have installed all the prerequisites. When I load a video it extracts the frames, but when I try to extract the faces the output folder is empty. It doesn’t extract any faces, and the cmd prompt appears for just a few seconds. It doesn’t give me errors. What can I do?

  28. You can’t adjust the individual nodes and layers, unless you go into the python code directly.

    The Original model is the standard one. You can use Lowmem for a model of reduced complexity. The _lowmem engine uses an even simpler model. Each of these corresponds to a different number of “nodes” and “layers”.

    It’s unclear what “layers” even means in FakeApp, as people report training with 4000 layers, which should be impossible.

  29. Everything worked except the last part.

    [edited due to length]

    But the finished “movie” is only a few KB, with no image, nothing.

    • Was the intermediate file (without sound) correct or empty?

      Did you manually check your merged folder (swaps) to make sure the images are present with the correct naming/numbering?

  30. Is it worth trying to install on Windows 7?

    I would have tried already without asking first, but I just ordered a new GPU for my Windows 7 computer, so I’ll have to wait a few days anyway.

  31. Keep getting “Failed to convert image” code 2: out of memory when trying to merge. Everything works smoothly up until that point.

    • Someone else had this error caused by mixing converters between model types. You must use the same model type (dfaker, etc.) for everything. Run the face extractions with the same engine/model as well. Don’t mix them. The alignments.json file is also slightly different.

      • Yeah, I got it working. The end result is great on dfaker, but is there any way I can move the face up a bit, or am I restricted to what the extractor cropped out? Normally I would mess with the blur/erosion options, but that doesn’t seem to be a thing with dfaker.

          • Sounds good. The dfaker portion of the program is just phenomenal; I managed to get a great-looking face in around 8-10 hours with batch size 8 on a 1060. I expected it to take much longer than that.

        • I can’t nest further into your comments, so replying here… use the forums if you want to have longer discussions.

          For dfaker, you can get good results after the time like you say, but to get sharper results, you have to wait a lot longer in my experience. Like the first 90% takes 10% of the time, the last 10% takes 90% of the time type of deal.

    • I think the 970m has a variable amount of VRAM. 4GB or more is better. If you have less, use the _lowmem engine.

  32. Also what’s this on a different laptop?

    Traceback (most recent call last):
    File “faceswap\faceswap.py”, line 8, in
    from lib.cli import FullHelpArgumentParser
    File “C:\OpenFaceSwap\faceswap\lib\cli.py”, line 7, in
    from lib.FaceFilter import FaceFilter
    File “C:\OpenFaceSwap\faceswap\lib\FaceFilter.py”, line 3, in
    import face_recognition
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py”, line 7, in
    from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py”, line 4, in
    import dlib
    ImportError: DLL load failed: The specified module could not be found.
    Press any key to continue . . .

    • As mentioned in other comments, make sure you install the exact requirements. CUDA 9.0, not 9.1. cuDNN 7.05 for 9.0, not 7.05 for 9.1. Also, you have to install cuDNN properly. Install the latest graphics card drivers from NVIDIA. Install redist 2015.

      • Hi, sorry to tag onto this comment chain, but I get the exact same error when I attempt to extract faces, and I’m at a loss. It manages to extract frames from the videos just fine, but when I go to try the face extraction from those frames it’s a bust. My hardware is above the minimum requirements and I am using Windows 10.

        I installed all prerequisites: Redist 2015, CUDA 9.0, cuDNN 7.05 for CUDA 9.0 (and copied the files to correct locations in CUDA) and got the latest NVIDIA graphics drivers.

        After all those, I installed OpenFaceSwap and still get that exact same error that a few have received. I’ve reinstalled them, checked for errors on all the prerequisites, copied the files over again, and restarted the computer, but I get the same error every time.

        Any thoughts and assistance would be grand 🙂 Thanks

        • Double-check your environment PATH variable to make sure it looks “normal” and has the CUDA paths in there.

          Do you have other CUDA or python installations? You may need to remove them, at least from the paths.

          What CPU and GPU are you using? Also, you have Win10 Home or Pro (not the weird stripped down one)?

  33. Got everything to work up until I click on ‘Movie’. I am using a video for A and a set of images only for B. The model is good, and the swaps work (B->A), but clicking Movie I get an error which states:

    “[image2 @000002625b7da7c0] Could not find file with path ‘.\merge\[Filename].jpg’ and index in the range 0-4 .\merge\[Filename].jpg: No such file or directory”

    Any tips? Thanks!

    • When you open the swaps folder, what is the naming sequence of the images? Does it go from something like filename1.jpg, filename2.jpg, and so on? Or does it skip and start from filename5.jpg? You have to adjust the ffmpeg command if you don’t start at 1 and increase consecutively with no jumps in numbering.

      What is the exact command that is entered (can copy/paste from the command bar).

      • Hello. I am having the same issue.

        ‘Could find no file with path ‘C:\Users\paulr\Documents\FaceSwapLibrary\FaceSwapSwaps\480P_600K_141193162A%d.jpg’ and index in the range 0-4′

        My swaps’ naming begins at ‘FileName67’ and ends at ‘FileName2217’

        There are 1744 swaps in total

        My Movie Commands is ‘ffmpeg -r 25 -f image2 -y -i “C:\Users\user\Documents\FaceSwapLibrary\FaceSwapSwaps\480P_600K_141193162A”%d.jpg -vcodec libx264 -crf 15 -pix_fmt yuv420p “480P_600K_141193162A_output.mp4″‘

        How must the command (or file naming) be altered in order to work?

        • The standard command assumes you start with “1” as the index.

          You have to enter a custom command:
          before the image name (the part in quotes), add the -start_number option:
          -start_number 67 “filenamestuff…”

          The default settings are for:
          imageA1.jpg, imageA2.jpg…
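          So for the command you posted, something like this should work (your original command with the start number added before the input):

          ffmpeg -r 25 -f image2 -y -start_number 67 -i "C:\Users\user\Documents\FaceSwapLibrary\FaceSwapSwaps\480P_600K_141193162A"%d.jpg -vcodec libx264 -crf 15 -pix_fmt yuv420p "480P_600K_141193162A_output.mp4"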

  34. Help, keep getting this.

    Traceback (most recent call last):
    File “faceswap\faceswap.py”, line 8, in
    from lib.cli import FullHelpArgumentParser
    File “C:\OpenFaceSwap\faceswap\lib\cli.py”, line 7, in
    from lib.FaceFilter import FaceFilter
    File “C:\OpenFaceSwap\faceswap\lib\FaceFilter.py”, line 3, in
    import face_recognition
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py”, line 7, in
    from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py”, line 24, in
    cnn_face_detector = dlib.cnn_face_detection_model_v1(cnn_face_detection_model)
    RuntimeError: Error while calling cudaGetDevice(&the_device_id) in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:178. code: 38, reason: no CUDA-capable device is detected
    Press any key to continue . . .

    • As stated in the error, you need to have a CUDA-capable device. You need to install the driver and CUDA/cuDNN requirements exactly as stated above, and have a card with NVIDIA compute capability 3.0+.

  35. Hi, I have a question about DFaker

    I want to change the face area in the merging process. The chins of the two faces are at different heights; in the training preview the shape correction is right, but it is not carried over to the conversion.
    Can I change any part of the code to modify the conversion area and cover the chin? Do you know where?

    Thanks in advance

    • Dfaker’s merging is a bit complicated.

      If you want to remove parts of the new chin (erode), you can do that with post-processing: overlay the original/new footage, use a mask and custom feather using After Effects.

      If you need to add more chin… that’s a bit hard.

        • If you think it looks okay in the previews, then you shouldn’t need to change the model, just the conversion code.

            • I have been trying to change the Convert_DFaker.py file, but I think the face it receives is cropped

            In this def:

            def patch_image( self, image, face_detected, size ):

            Is face_detected the entire face, or just the cropped face (without the chin, etc.)?

            Thanks!!

        • Can’t reply further in the comment chain due to chain length limits. Please use the forum if you need further discussion.

          I believe patch_image receives the full face. The next line calls the get_align_mat function, which seems to work on full faces.

  36. I got it working on Windows 7. I didn’t need to use any special tricks.

    It’s not quite working actually – when I try to do anything that involves actually running something through python, the CMD window that opens up closes immediately.

    I figured there’s an error message there I can’t read because the window closes so fast, so I tried just copying the python command from one of the settings boxes and pasting it in a command prompt in the OFS directory. That actually worked fine.

    I’m up to the training step, and everything is working so far, but I have to copy/paste the (full) python commands from the settings box to a CMD box instead of just clicking the appropriate button.

    • Any command that makes it to console is supposed to pause waiting for a key press afterwards. So something is making the console very unhappy if it closes immediately.

      Unfortunately, I don’t have a Win7 system to test on. Wish I could see the error message that is flitting by.

  37. Thanks, my files are now in numerical order ([Filename]1.jpg etc.), but I am still getting an error.

    the exact command is:

    ffmpeg -r 25 -f image2 -y -i “.\merge\[Filename]”%d.jpg -vcodec libx264 -crf 15 -pix_fmt yuv420p “C:\Users\[User]\Documents\[Folder]”

    • Your [Filename] stands for something else, right? Make sure you don’t have strange symbols or punctuation in the filename. Spaces should be okay.

      Also, verify that the folder .\merge exists and has the files.

      You can hand-create a few images like a1.jpg, a2.jpg, a3.jpg.
      Then run the command with “.\merge\a”%d.jpg and see if you get a really short clip with 3 frames.
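      For example, assuming 25 fps and a throwaway output name:

      ffmpeg -r 25 -f image2 -y -i ".\merge\a"%d.jpg -vcodec libx264 -crf 15 -pix_fmt yuv420p test.mp4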

  38. I’m on Windows 10; frame extraction worked fine, but when trying to align faces I get the following:
    C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    Input Directory: C:\OpenFaceSwap\imgA
    Output Directory: C:\OpenFaceSwap\alignA
    Filter: filter.jpg
    Using json serializer
    Starting, this may take a while…
    Loading Extract from Extract_Align plugin…
    0%| | 0/17940 [00:00 physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:0b:00.0, compute capability: 6.1)
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    0%| | 1/17940 [00:26parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)
    Press any key to continue . . .

    And the alignA folder remains empty

    • Did you install CUDA and cuDNN exactly as described above? The parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms) is a CUDA error, possibly.

      Also, what is the exact command you entered (copy/paste from command box)?

  39. In the requirements, you mention Win10. I meet all the other requirements, but am on a Win7 (64 bit) machine. Would it possibly work on that?
    (getting back into this after some time off due to work – still testing the old Radek build)

  40. C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters

    This results in 0 faces detected. What can I do?

    • The warning is irrelevant.

      Are you entering the correct paths, or putting the images in the default directories?

  41. I’m having an issue getting it up and training…

    I’ve got large data sets of extracted and aligned faces I was hoping to use to start training.

    When I put those in, and click “Model” I get this screen:
    https://imgur.com/a/WlqJDkR

    I see the command prompt flash for a portion of a second before I get here.

    Anything obvious I’m doing wrong?

    Windows 7 64 bit
    GTX 1080
    Think I’ve got all the right other stuff installed as well (most are similar to old Radek build which I still have on this machine)

    • Remove the first backslash in your paths. Also, make sure the paths are correct, and if you are not sure, include the full path (including C:\ or D:\, etc.)

      • Good call, I should’ve thought to double-check the path. No harm in including the full path.

        Tried that, but it seems like the same thing is still happening?

        https://imgur.com/a/CfK5UaY
        (Tried both with and without final \ at the end of the path to same results both ways)

        • Try removing the spaces in the path names (rename the folders). Spaces should be okay for ffmpeg, but the python parts might not handle them properly.

  42. How easy is this to update to the latest faceswap build? I was using (I believe) your WinPython portable install, but I tried updating it to the latest faceswap git and it’s stuffed it.

    • This, as well as the WinPython portable install, should be easy to update. This is just a GUI shell written to interact with the portable python package. They are both set up nearly identically. This one may have slightly updated builds (TensorFlow 1.7, dlib 19.10.0, etc.), but it shouldn’t matter that much.

      The python environment shouldn’t be changed by pulling the latest commit… very strange.

      The repos are only 100MB or so (at least they were; I haven’t pulled the latest). Don’t overwrite old repos if you want to be safe. Just rename the old repo directory (like faceswap_old). You should still be able to run the scripts like before.

      Then pull the latest commit into a fresh directory. To be extra safe, pull the latest commit into a completely different directory, then manually copy it into the python folders.

      Double-check that the old repo still works. Nothing should have affected it.

  43. After running Images A, are you supposed to remove any frames that the face you want to replace is not in, or only after the Faces A part?

    I isolated the correct face in Faces A (i.e., the alignA folder) but not in Images A (i.e., imgA), and after running through the process, the faceswap was applied to every frame, including a face I did not want swapped.

    • Do you know how to get the program to change only one face in the video instead of both? I deleted the face that I did not want switched from the Faces A folder, but it didn’t work.

      • There is a filter function, which only changes the face based on an input image you provide. You can try that, although if you have sharp angles, it may not detect the same face.

  44. I installed everything correctly, including CUDA 9.0 and cuDNN 7.05. The first time I used it, it worked fine. Then the second time I used it, I got this error message after extracting images and selecting faces:

    ‘cscript’ is not recognized as an internal or external command,
    operable program or batch file.
    Traceback (most recent call last):
    File “faceswap\faceswap.py”, line 8, in
    from lib.cli import FullHelpArgumentParser
    File “C:\OpenFaceSwap\faceswap\lib\cli.py”, line 7, in
    from lib.FaceFilter import FaceFilter
    File “C:\OpenFaceSwap\faceswap\lib\FaceFilter.py”, line 3, in
    import face_recognition
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py”, line 7, in
    from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py”, line 4, in
    import dlib
    ImportError: DLL load failed: The specified module could not be found.

    I have tried uninstalling and reinstalling the program and deleting the files inside the program folder. What else can I do?

    • What do your paths look like? Make sure they include the CUDA paths as well as the basic ones like:
      C:\Windows\system32
      C:\Windows

      I think a subset of users is encountering a path error, likely related to Ruby Shoes, where paths are getting removed and not restored.

        • From Windows search, type “environment variables”. It opens up a window.

          There should be a “PATH” variable you can view for all users, as well as perhaps a local/user-specific PATH setting.
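          From a command prompt, you can also print the active path and check that the cuDNN DLL is findable through it (the second command assumes cuDNN was copied into the CUDA bin folder as described in the installation section above):

          echo %PATH%
          where cudnn64_7.dll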

  45. Is it me, or is DFaker not compatible with the alignments generated with faceswap? It works the other way around: I can train and merge with faceswap using alignments generated by DFaker, but I can’t train and merge with DFaker using alignments generated by faceswap. I imagine it would be because DFaker requires more landmarks than faceswap is generating, am I correct? In any case, is there any way to make it compatible? DFaker’s extraction misses a lot of frames, while CNN extraction with faceswap is almost perfect, except for, as you mentioned in your benchmarks, significantly higher false positives. It would be cool if I could use CNN extraction with DFaker.

    • Yes, Dfaker will not work with alignments made from other models. You can look at the .json files and see there is a slight difference in the format.

      I am puzzled why, because I thought I set it so Dfaker uses the exact same face detector as faceswap.

      Are you using DFaker from the _exp engine? The original dfaker in the WinPython portable package uses a different face detector, which is a bit inferior, as you stated.

      • No, I was using the experimental engine, but my alignments had been created using the python scripts, not knowing this had been modified for OFS. I’m not sure I understand, though: did you modify the alignments.json format generated by faceswap to make it compatible with DFaker when using the experimental engine, or is it that they are still not compatible with each other, and while the formats are still different, DFaker is now using the same face detection as faceswap? If that is the case, I would have thought that the serializer would be more intrinsically tied to the face detection implementation, and that it would be easier to just modify its format. When I finish training I will test to see if, as you say, DFaker detection using OFS is any different from the python scripts, and report back.

        • The original dfaker used a pytorch based face detector, which was pretty good, but a bit less sensitive than the current faceswap cnn detector.

          The github port of dfaker into faceswap uses a very bad HOG detector. It is not “official”, but really a proof of concept.

          I modified the dfaker engine in OFS to use the regular faceswap cnn detector, which should be good quality.
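
          If you are curious, here is a minimal sketch (the file paths are hypothetical) that prints the first record of each alignments.json so you can compare the two formats side by side:

          import json

          # Paths are examples only; point these at the two files you want to compare.
          for path in (r"C:\OpenFaceSwap\imgA\alignments.json",
                       r"C:\dfaker_project\alignments.json"):
              with open(path, "r") as f:
                  data = json.load(f)
              # Show just the first entry; that is enough to see the structural difference.
              first = data[0] if isinstance(data, list) else next(iter(data.items()))
              print(path)
              print(json.dumps(first)[:300])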

  46. Also, it seems to me that in faceswap, when using the merge process, faceswap is also merging frames with false positives, even though I had deleted them from the aligned folder. I understand that it is following whatever is written in the alignments.json file, but I don’t remember this being an issue before. If I remember correctly, whichever frames you deleted from the aligned folder would not be merged into the final output. Is there a way to get that behavior back?

    • It shouldn’t be changed from however the python scripts behaved before. You can always double-check by running the python command prompt and entering commands as before. The repo should have included the latest .json handling commits. Were you sending an extra switch/option before? If so, you will have to add it using the “custom” command box.

      You can also pull any version from the repo, copy it into a folder with the appropriate naming, set the openfaceswapconfig.txt file accordingly, and try it out.

      • Now that I think about it, I believe this used to be the case with FakeApp, not necessarily faceswap. In any case, I just typed -h into the custom commands box, and it claims that if you add -a and point it to the aligned images, it will do exactly what I’m looking for.
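
        For anyone else who lands here, a hedged example based on that -h output (the folders are the OpenFaceSwap defaults; the -a switch is taken from the commenter’s report, so verify it against your own -h output):

        python\scripts\python.bat faceswap\faceswap.py convert -i C:\OpenFaceSwap\imgA -o C:\OpenFaceSwap\merge -m C:\OpenFaceSwap\model -a C:\OpenFaceSwap\alignA

        If -a behaves as described, only frames whose faces are still present in alignA get converted.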

  47. Also, just to let you know, there’s a bug with DFaker using the python scripts, which I know has happened to at least one other person, where after a few hours the preview will suddenly change from attempting to merge the face to a changing blob of solid colors. After a while the colors seem to settle on red, and from then on the model becomes broken and, as far as I know, unfixable unless you have a backup. Sometimes it’s the third row of each image that goes completely red and sometimes it’s the second; regardless, attempting the merge process while the models are in this broken red state results in blank faces in the final output. I don’t know if you’ve encountered this bug before, or if it’s not present in OFS, but if you do know what can cause this, please let me know.

    I posted this bug a while ago on DFaker’s GitHub repo and no one got back to me as to what may have caused it. The only response I got was from Kellurian, who mentioned he had also encountered the bug, though interestingly he says it happened to him using the original trainer, with which I’ve never had any problems.

    • That sounds like an overtraining error, but it’s not clear why it only shows up in one row or something weird like that. Is your training data set very small (like only 30 images)? I’ve never run into this issue before.

      • It’s actually very large: 7k images for set B, often a couple of tens of thousands for set A. I had to modify the python code to allow it to load up to 7k images for each set. I don’t know if it would be overtraining. I am generally able to get some further detail, especially around the edges of the face or on more difficult angles, after I load a backup and restart training. The improvements are relatively minor, but they are noticeable when merging.

        Also by second row I mean the re-drawing preview and by third row I mean the face swap preview. The original cropped image never goes red.

        I might try going back to loading the default number of images, just to see if that helps, since I don’t really remember if I had these issues when I used the default parameters.

    • Yup, that’s good stuff. He says he will release a Windows pre-built version soon, so I’ll leave it for him to do that. Giving it a bit of time to iron out some bugs, since it is brand new. If you want a shell on top, it shouldn’t be too hard to incorporate the current GUI with that engine.

  48. I have a GTX 1060 3GB. Original does not work for me, so I have to configure it to lowmem.

    What is the difference between lowmem and Original? Will the results be the same?

    • Some of the code was compiled for Intel processors with Intel-specific features like AVX support.

      You may need to build the python engine yourself, installing all the dependencies.

      You can also try the noavx engine I posted in the forums (search forums).

      But in general, I think it may not work directly for an AMD processor.

  49. Also, is there a value in having more than 1 GPU? e.g. would 2x GTX 1050 Ti perform better than 1x GTX 1070? For deep learning in general, is it better to have a single GPU that does all the work or two GPUs? does the code allow for work to run on two GPUs even?

    • Check the forums for a similar issue. The most common causes may be 1) CUDA/cuDNN installation issues or 2) path/folder issues.

  50. Hello friend, forgive me for not speaking correctly; I’m Spanish and I’m using Google Translate (you know how things go xD)

    I wanted to ask you a question. My PC has an Nvidia GeForce GTX 1080, an i7-7700K CPU (4.20 GHz) and 32 GB of RAM. Are there any high settings for my PC? Or should I leave everything at the defaults?

    Thank you very much, and the truth is that you have done a good job. As we say here, YOU ARE A MACHINE.

    • You can just use the original model. With your PC, you may want to play around with DFaker or even GAN128, but these are less polished. So if you are new to deepfakes, just use the original model and you will have faster training than others.

      You can possibly increase your batch size beyond the default of 64 (try 128, 256). That may help a little.
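
      If you prefer the command line, the same -bs switch shown in other comments applies (the folders here are the defaults; lower the value again if you hit out-of-memory errors):

      python\scripts\python.bat faceswap\faceswap.py train -A C:\OpenFaceSwap\alignA -B C:\OpenFaceSwap\alignB -m C:\OpenFaceSwap\model -bs 128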

  51. Hi,

    When I use faceswap_exp with DFaker and get all the faces aligned, I usually need to delete some trash.

    The problem is that when I convert, I always get an “Aligned folder not found, all faces in alignment file will be converted” message (or something like that). That’s because of an exception in the conversion code (scandir, I think), so all the trash gets converted too (and I think the process takes much more time).

    Any idea how to fix this?

    Thanks!!!

    • Dfaker has a custom experimental conversion script, so it may not be as polished as the more mature models. For right now, you have to convert everything.

      I think inserting the face filter option may not be too difficult if you can code it in. Otherwise, if you really want to avoid converting everything, your best “easy” bet is to pre-cut the images so only one face shows up.

      • I fixed the ‘skip deleted aligned faces’ process by editing the dir folders (it was a bit of a mess between the base folder and the /aligned folder, and there was a problem with scandir).

        But this didn’t save any time in the merging process (roughly 3 times the duration of the get-aligned-faces step).

      • As I mentioned in a different comment, I believe you can add -a followed by the aligned folder location as a custom argument and this should only convert the faces that you have inside the aligned folder.

  52. Hey, thanks for making this. After the fiasco that was FakeApp, and Radek not really having time to develop MyFakeApp, I’m glad someone else was able to step up to the plate and make an easy-to-use GUI.

    I transitioned from MyFakeApp and was able to transfer over my models, but when I tried using any sort of custom folders it gave me errors saying it was an unrecognized command; afterwards I ran through it again and it created folders in a default location. My question is: are there plans to implement custom folders in the future, or do I just need to add some sort of command? I am very organized, and although it’s not the end of the world to have to swap different faces into the imgA, imgB, and model folders, it would be nice if, along with the project file, I could set things up so that I could quickly open another faceset and model for other projects.

    Sorry if this is something simple I’m missing, but I just learned how to use the program today after transitioning from MyFakeApp.

    • It should work with custom folders. However, currently, spaces in your paths may mess up the python commands. Can you try custom paths without any spaces?

      • I imagine that would solve the issue, because as soon as I used the default folders, which have no spaces, it worked. I’ll report back later. I just have a bunch of different files in different places, and those folders have spaces in them. I guess I could just move everything to the OpenFaceSwap folder and not use spaces in any of the folders; I just have a lot of subfolders for organization, and I have project folders and then backups that I sync across my network so I don’t lose any data even if one of my hard drives fails. I also number the folders by priority to help organize further, and include things like dates.

        Is OpenFaceSwap a program you are planning to refine in terms of the interface, or are you planning to leave it as is? I don’t mean reinventing the wheel as far as the interface goes, just little things like having the script detect spaces and just go off the folder selection, or detecting the number of frames in the video when you input it. I can use it as it is now; those are just some things that come to mind that I imagine would help the end user a lot. Obviously things like the faceswap script itself are being improved independently on GitHub, etc. Sorry if any of this seems ignorant; I am just trying to get a sense of the development community for this technology and who is working on what. I have talked to a lot of end users but not any developers. As I understand it, to the public anyway, it’s a pretty new emerging technology with a small development community (a lot of that probably being due to the media shoehorning it into being only for revenge porn or some nonsense like that), but I see a lot of creative potential for people to do things like remake TV shows/movies, do dubs, etc.

        I don’t want to seem ungrateful; I am just curious, because the one thing these types of apps could use is more polish in terms of usability for end users who aren’t code savvy. Something along the lines of Format Factory for converting videos.

        Learning the program wasn’t that difficult because I had used MyFakeApp for a while, and although I can’t code, I’m not a dummy when it comes to computers and troubleshooting problems. But I imagine that to a first-time user it’s a bit overwhelming, and one way to grow the community is to make things more and more user friendly. It’s already a lot of manual work to make datasets (good ones anyway); the more that can be automated without detracting much from quality, the more people (including those who are more tech savvy) it will bring to the field, in my opinion, just via word of mouth. A lot of people are still very ignorant of this technology even existing or being a possibility.

        • I will update it, but only when I have free time. I am more interested in working on other tools to aid the deepfakes process.

          Regarding your point about ease of use, yes, it’s true that it’s a bit complicated. But so is Photoshop. A brand new beginner can’t learn to use Photoshop properly without a few hours of tutorials and practice. The same goes for Premiere or After Effects. No content creation tool is beginner friendly, except for Snapchat-style trivial stuff, and deepfakes is not trivial.

          That said, I think there is room for easy-to-use tools for specific purposes. I have a few ideas… if I have time to try them out.

  53. [image2 @ 000001afca44a880] Could find no file with path ‘.\merge\vlc-record-2018-04-24-19h32m03s-xvideos.com_a5f61c5ef1a26a7a41ec65fc33a550be.mp4-A%d.png’ and index in the range 0-4
    .\merge\vlc-record-2018-04-24-19h32m03s-xvideos.com_a5f61c5ef1a26a7a41ec65fc33a550be.mp4-A%d.png: No such file or directory
    Press any key to continue . . .

    Any help? I’ve run the model and the swaps, but when I click Movie I get the error at the top of this comment, highlighted in red.

    ffmpeg version N-89369-g5a93a85fd0 Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 7.2.0 (GCC)
    configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
    libavutil 56. 4.100 / 56. 4.100
    libavcodec 58. 6.102 / 58. 6.102
    libavformat 58. 2.103 / 58. 2.103
    libavdevice 58. 0.100 / 58. 0.100
    libavfilter 7. 6.100 / 7. 6.100
    libswscale 5. 0.101 / 5. 0.101
    libswresample 3. 0.101 / 3. 0.101
    libpostproc 55. 0.100 / 55. 0.100
    [image2 @ 000001afca44a880] Could find no file with path ‘.\merge\vlc-record-2018-04-24-19h32m03s-xvideos.com_a5f61c5ef1a26a7a41ec65fc33a550be.mp4-A%d.png’ and index in the range 0-4
    .\merge\vlc-record-2018-04-24-19h32m03s-xvideos.com_a5f61c5ef1a26a7a41ec65fc33a550be.mp4-A%d.png: No such file or directory
    Press any key to continue . . .

    • Did you type out the prefix/filename directly, or are you using the default settings?

      It looks like the filename path may be incorrect (also check if you meant jpg instead of png).

      There is a chance the filename is messing things up… can you rename the images to something simpler, like image1.jpg, instead of that long filename with a period inside?

    • Use the forum if possible. Comments are moderated due to spam and anonymous posting. If you register for the forum with an email you should be able to post at will.

  54. hi,

    I have this report :'(

    Traceback (most recent call last):
    File “faceswap\faceswap.py”, line 8, in
    from lib.cli import FullHelpArgumentParser
    File “C:\OpenFaceSwap\faceswap\lib\cli.py”, line 7, in
    from lib.FaceFilter import FaceFilter
    File “C:\OpenFaceSwap\faceswap\lib\FaceFilter.py”, line 3, in
    import face_recognition
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py”, line 7, in
    from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py”, line 24, in
    cnn_face_detector = dlib.cnn_face_detection_model_v1(cnn_face_detection_model)
    RuntimeError: Error while calling cudaGetDevice(&the_device_id) in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:178. code: 35, reason: CUDA driver version is insufficient for CUDA runtime version
    Press any key to continue . . .

    Any idea how to fix this?

    Config :

    MacBook Pro 13, 8GB RAM, 256GB storage
    Windows 10 Pro
    Thanks!

  55. Every time I run the faces phase, the popup will run to completion, but it always says 0 faces detected. How do I fix this?

  56. Hi, just running model right now but i get this error:

    C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    Model A Directory: C:\OpenFaceSwap\alignA
    Model B Directory: C:\OpenFaceSwap\alignB
    Training data directory: C:\OpenFaceSwap\model
    Loading data, this may take a while…
    Loading Model from Model_Original plugin…
    Using live preview
    Exception in thread Thread-1:
    Traceback (most recent call last):
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 916, in _bootstrap_inner
    self.run()
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 864, in run
    self._target(*self._args, **self._kwargs)
    File “C:\OpenFaceSwap\faceswap\scripts\train.py”, line 137, in processThread
    model = PluginLoader.get_model(trainer)(get_folder(self.arguments.model_dir))
    TypeError: __init__() missing 1 required positional argument: ‘gpus’ <— This one

    I have a NVIDIA 1050 and everything else has worked so far

    • It looks like you are loading an old model. The model may not be compatible. Clear the model directory (or move the files elsewhere), and try with a fresh model.

  57. It saved the frames of the video perfectly. I cannot figure out how to extract the faces. I am 100% sure I have CUDA 9.0 and cuDNN 7.05 for CUDA 9.0. I also have the C++ 2015 Redistributable x64 & x86. I reinstalled faceswap after installing all the prerequisites.

    Message upon failure:

    C:\Users\paulr\Documents\Face Swap\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    usage: faceswap.py [-h] {extract,train,convert} …

    positional arguments:
    {extract,train,convert}
    extract Extract the faces from a pictures.
    train This command trains the model for the two faces A and
    B.
    convert Convert a source image to a new one with the face
    swapped.

    optional arguments:
    -h, --help show this help message and exit
    faceswap.py: error: unrecognized arguments: Swap Input Swap Input Aligned
    Press any key to continue . . .

    • Possibly from having spaces in your home path… can you move the openfaceswap directory to a base C: or D: path (like D:\openfaceswap). Avoid spaces in the paths.
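
      If you would rather keep the current location, driving the scripts yourself from a command prompt with each path wrapped in quotes should also work around the spaces, e.g. (substitute your own paths):

      python\scripts\python.bat faceswap\faceswap.py extract -i "C:\Users\paulr\Documents\Face Swap\OpenFaceSwap\imgA" -o "C:\Users\paulr\Documents\Face Swap\OpenFaceSwap\alignA"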

  58. Mine can get the faces for model A, but cannot get the model B. After pressing the button, the folder ./alignB is completely empty. Any reason for this?

    • Are the faces “difficult”, like side views, harsh lighting, or anything unusual? Double check that you have the images (or video) in the correct directories.

  59. C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    usage: faceswap.py [-h] {extract,train,convert} …

    positional arguments:
    {extract,train,convert}
    extract Extract the faces from a pictures.
    train This command trains the model for the two faces A and
    B.
    convert Convert a source image to a new one with the face
    swapped.

    optional arguments:
    -h, --help show this help message and exit
    faceswap.py: error: unrecognized arguments: A A
    Press any key to continue . . .

    Please help? I can’t seem to fix it.

  60. i3-6300 at 3.80GHz
    8GB RAM
    MSI 1030 Aero 2GB overclocked to 1650 MHz with a 3231 MHz memory clock

    Using the lowmem trainer with any batch size always gives me OOM or reshape errors
    with the command: python\scripts\python.bat faceswap_lowmem\faceswap.py train -A F:\OpenFaceSwap\FacesA -B F:\OpenFaceSwap\FacesB -m F:\OpenFaceSwap\Model -bs 1

    • Try the _lowmem engine (change the configuration text file), and THEN try the lowmem model. That is the “lowest” “low” you can get.

      If that doesn’t work, your card is not able to handle it. Try disabling any Windows Aero/3D effects, close all other programs, etc. Then try again.

  61. Hi. The software keeps giving DLL load error. I installed CUDA with nothing but itself (no drivers, no others, … only CUDA selected). Installed cuDNN. But it still doesn’t work.

    • You need the latest drivers from NVIDIA. They are not optional, as even drivers from 2017 do not cover the latest CUDA versions. You also need to copy the files correctly for cuDNN.

      Make sure you have an AVX supported CPU.

      Check that your environment PATH entries look “normal” and include CUDA.

    • Are you using the _lowmem engine (change configuration text file) and then using the lowmem setting? That will give you the smallest memory usage.

      3GB is borderline, but you should be able to run the above.

      Disable any other Windows 3D effects and close other programs.

  62. That worked, but when doing swaps with the command python\scripts\python.bat faceswap_lowmem\faceswap.py convert -i F:\OpenFaceSwap\ImagesA -o F:\OpenFaceSwap\Swaps -m F:\OpenFaceSwap\Model -t LowMem -D cnn -S I get this error repeatedly, no matter which engine I use or which options I pick in the GUI

    Input Directory: F:\OpenFaceSwap\ImagesA
    Output Directory: F:\OpenFaceSwap\Swaps

    Reading alignments from: F:\OpenFaceSwap\ImagesA\alignments.json
    0%| | 0/35400 [00:00<?, ?it/s]Failed to convert image: F:\OpenFaceSwap\ImagesA\2331298A1.jpg. Reason: __init__() got an unexpected keyword argument 'r'
    Failed to convert image: F:\OpenFaceSwap\ImagesA\2331298A10.jpg. Reason: __init__() got an unexpected keyword argument 'r'
    Failed to convert image: F:\OpenFaceSwap\ImagesA\2331298A100.jpg. Reason: __init__() got an unexpected keyword argument 'r'
    Failed to convert image: F:\OpenFaceSwap\ImagesA\2331298A10000.jpg. Reason: __init__() got an unexpected keyword argument 'r'
    Failed to convert image: F:\OpenFaceSwap\ImagesA\2331298A10001.jpg. Reason: __init__() got an unexpected keyword argument 'r'
    Failed to convert image: F:\OpenFaceSwap\ImagesA\2331298A10002.jpg. Reason: __init__() got an unexpected keyword argument 'r'
    Failed to convert image: F:\OpenFaceSwap\ImagesA\2331298A10003.jpg. Reason: __init__() got an unexpected keyword argument 'r'
    Failed to convert image: F:\OpenFaceSwap\ImagesA\2331298A10004.jpg. Reason: __init__() got an unexpected keyword argument 'r'
    Failed to convert image: F:\OpenFaceSwap\ImagesA\2331298A10005.jpg. Reason: __init__() got an unexpected keyword argument 'r'
    0%|▏ | 76/35400 [00:00<05:41, 103.56it/s]Failed to convert image:

    • Can you test with a small video clip and use the default folders? I think something is wrong with the directories and/or filenames. Make sure there are no other files in the directories.

  63. The more the face turns to the side, the greener it becomes. The color is correct when the face is straight on.

    • How large is your training data? Do you have enough side faces? If you have only front views and there is some green background, it may get confused with side views.

      • My data set is around 3000 images for A and B, but I think you’re right, because there are only two video sources for B, the facial expressions repeat, and there are few side faces. I will start again with a more varied data set and come back with new results.

  64. When I press Faces A I keep getting unrecognized arguments. When I choose the video and press Images A I get a “can’t find (name of file).jpg” error, so I chose another directory to put the frames in.

    • Never mind the above problem; as you said before, it has something to do with spaces. But what do you type in for the low memory engine?

    • Wherever you see the “faceswap” directory, instead replace with “faceswap_lowmem”. Then choose the LowMem trainer for training and conversion.
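
      Concretely, openfaceswapconfig.txt holds one command line per step (the same format quoted in a later comment for the _exp engine); a low-memory setup only changes the directory name, e.g.:

      Base train command
      python\scripts\python.bat faceswap_lowmem\faceswap.py train
      Base merge command
      python\scripts\python.bat faceswap_lowmem\faceswap.py convert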

  65. Hello, my system is the same as above:

    Win 10
    GTX 1060 3GB
    32GB RAM

    and my problem is the same as above (ResourceExhausted). I used the lowmem setting with batch size = 8 (but I couldn’t find the configuration text file; where is it?). I closed other programs but the problem keeps happening.

    I also got this:

    “Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.”

    Thank you for your help.

    • Close all other programs and disable any Windows 3D effects, etc. Use the _lowmem engine by changing the configuration text file. Then, choose the LowMem trainer for training/conversion.

      • I did all those things except the configuration text file; I’m sorry, I don’t know where it is. I chose LowMem via the “gear” icon in the “Model” row.

        • It’s in the directory where you installed the program. There’s a text file. Check the paths there.

      • Ah, I forgot this: when installing CUDA, the installer didn’t let me install with Visual Studio Integration (VSI), so I just unchecked it and went ahead without it. I have since seen that your program can run without the VSI, but is it necessary?

  66. Traceback (most recent call last):
    File “faceswap\faceswap.py”, line 8, in
    from lib.cli import FullHelpArgumentParser
    File “D:\Face\OpenFaceSwap\faceswap\lib\cli.py”, line 7, in
    from lib.FaceFilter import FaceFilter
    File “D:\Face\OpenFaceSwap\faceswap\lib\FaceFilter.py”, line 3, in
    import face_recognition
    File “D:\Face\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py”, line 7, in
    from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
    File “D:\Face\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py”, line 4, in
    import dlib
    ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
    Press any key to continue . . .

  67. Unfortunately I had a terrible issue the other day and had to do a full clean reinstall of Windows 10, then reinstalled everything else the same way it was before. I have zero problems extracting and aligning the faces from video A using the GPU, but as soon as I try to align the faces from video B (images B), I instantly get an out-of-memory error from cudaMalloc using the low memory engine. Same PC setup and overclock, and this didn’t happen before, with the same videos too. Please help; I’ve been messing with this program for weeks with no results and I don’t want to give up.

    • Is video B at a higher resolution? You may need to resize the screencaps to a smaller resolution.

  68. So I installed FakeApp 1.1 to test whether my computer works with this. After hitting OOM twice, I set the batch size to 16 and the nodes to 256 in the Train tab, and it worked.

    Where is the “Nodes” setting, or its equivalent, in OpenFaceSwap?

    Thank you.

  69. Can more than one process run at the same time? For example can I click on Faces A and Faces B and let them run at the same time? Or is doing the steps one at a time important? Thanks!

  70. Hi, great software, got it to work, but now on the exp engine I keep getting this error while using DFaker for the modelling, and it doesn’t begin the training…

    ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[64,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    [[Node: model_2/conv2d_14/convolution = Conv2D[T=DT_FLOAT, data_format=”NCHW”, dilations=[1, 1, 1, 1], padding=”SAME”, strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device=”/job:localhost/replica:0/task:0/device:GPU:0″](model_2/leaky_re_lu_13/sub, conv2d_14/kernel/read)]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    [[Node: loss/add/_665 = _Recv[client_terminated=false, recv_device=”/job:localhost/replica:0/task:0/device:CPU:0″, send_device=”/job:localhost/replica:0/task:0/device:GPU:0″, send_device_incarnation=1, tensor_name=”edge_3686_loss/add”, tensor_type=DT_FLOAT, _device=”/job:localhost/replica:0/task:0/device:CPU:0″]()]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    BTW, I have a GTX 1070 Ti 8GB, Windows 10 and a 6-core Ryzen CPU…

    Help would be appreciated, but nevertheless thanks for the work 🙂

    • Never mind, it was a memory problem. I only have 8GB of system RAM, which doesn’t seem to be enough for DFaker; with a batch size of 16 it finally worked.

    • DFaker is extremely memory intensive. You are getting out of memory errors, even with 8GB VRAM. Try reducing your batch size to 8 or even 4.

  71. I also have the error importing dlib. After the first installation it works; after restarting my PC it stops working with the “import dlib” failure. I checked my environment variables and realized that my PATH variable only contained the OpenFaceSwap installation dir. Before I installed OpenFaceSwap it contained several paths. After that, I restored my PC to the point before installing OpenFaceSwap, watched the OpenFaceSwap installation process, and looked for any unusual output. Near the end of the installation it said my PATH variable was empty (it wasn’t) and set it to only the OpenFaceSwap installation location. It should only append something to this environment variable, not overwrite the whole content, shouldn’t it?

    • Yes, it should append, not overwrite. Something went wrong with the install or another interruption. Your path should include the CUDA paths as well as the Windows ones:

      C:\Windows\system32
      C:\Windows
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin\
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\libnvvp\
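
      As a temporary, current-session-only fix (it does not change the saved system setting), you can append the CUDA entries by hand in a command prompt:

      set PATH=%PATH%;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\libnvvp

      For a permanent repair, use the environment variables dialog described in an earlier reply.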

  72. I have no problem extracting faces from video a but I cannot get the faces to extract from a set of images for images B. Keep getting this:

    C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    usage: faceswap.py [-h] {extract,train,convert} …

    positional arguments:
    {extract,train,convert}
    extract Extract the faces from a pictures.
    train This command trains the model for the two faces A and
    B.
    convert Convert a source image to a new one with the face
    swapped.

    optional arguments:
    -h, --help show this help message and exit
    faceswap.py: error: unrecognized arguments: 2\photos
    Press any key to continue . . .

  73. I have done all the steps: the VS 2015 redistributable, CUDA and cuDNN are installed with the correct versions, the graphics drivers are the latest, there is no problem with the hardware, and the program runs without errors.
    But I can’t detect face data; it’s always 0.

  74. C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    Input Directory: C:\Users\ran\Desktop\why
    Output Directory: C:\OpenFaceSwap\alignA
    Filter: filter.jpg
    Using json serializer
    Starting, this may take a while…
    Loading Extract from Extract_Align plugin…
    100%|█████████████████████████████████████████████████████████ ███████████████████████████| 1/1 [00:01<00:00, 1.41s/it]
    Alignments filepath: C:\Users\ran\Desktop\why\alignments.json
    Writing alignments to: C:\Users\ran\Desktop\why\alignments.json
    ————————-
    Images found: 1
    Faces detected: 0
    ————————-
    Done!

    I can’t find a face.
    i7 8th gen / 940M

    • If you have feature requests, ask away, but part of the point is that you can “upgrade” the engine yourself anytime you want from the open source repos.

  75. Sorry, I am writing from Japan….

    Windows 10 / 64-bit

    I installed the OpenFaceSwap application, but when I run OpenFaceSwap it closes again after about 0.1 seconds.

    How do I run OpenFaceSwap on Windows 10 64-bit?

    Please give me a solution 🙁

    • Can you show screenshots? You should be able to just run the program. If that doesn’t work, try installing Shoes and opening the .rb file.

  76. OOM as well.
    Specs : 980 4GVRAM 16GB RAM
    1000 files model A
    120 files model B

    I tried lowering the batch size to 14, then 8, with the same error

    ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[16384,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    [[Node: training_1/Adam/mul_42 = Mul[T=DT_FLOAT, _device=”/job:localhost/replica:0/task:0/device:GPU:0″](training_1/Adam/sub_2, training_1/Adam/gradients/model_1/dense_1/MatMul_grad/MatMul_1)]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    Also, I tried low mem but it didn’t work.

    I also had the same error using another GUI called MyFakeApp, so I guess the problem lies in my system, but I can’t find why. Where can I look?

  77. Hi all,

    I have an old Core 2 Duo and can do everything with OFS except extract the faces. After installing the dlib build without AVX there are no more error messages, but no faces are found. With MyFakeApp I can extract, so is there a way to do this in OFS, so that I only use one piece of software? Although I can extract and train in parallel, I would like a complete package. Thank you for your support. I know that it does not make much sense to run OFS on this hardware, but I’m more interested in the AI and how it works, etc.

    Dual Core 2 3Ghz / 4GB
    Nvidia GT610 / 1GB

  78. Hi,
    I’ve changed the batch size to 8 and use LowMem, but I keep getting this result.
    What should I do? 🙁

    i5-8250u 4GB RAM
    GeForce 930MX 2GB

  79. I have run Faces A and Faces B several times but keep getting no results and an empty folder when I click on the magnifying glass. Images A and Images B give me results just fine. When I try to run the Model step, it just keeps telling me that “the number of images is lower than the batch size”. I assume the reason is what I stated above, not getting results from Faces A and Faces B. Any assistance?

  80. Starting, this may take a while…
    Loading Extract from Extract_Align plugin…
    0%| | 0/4698 [00:00<?, ?it/s] […] Created TensorFlow device […] -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    2018-06-08 14:55:34.510392: E T:\src\github\tensorflow\tensorflow\stream_executor\cuda\cuda_dnn.cc:396] Loaded runtime CuDNN library: 7104 (compatibility version 7100) but source was compiled with 7003 (compatibility version 7000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
    2018-06-08 14:55:34.517854: F T:\src\github\tensorflow\tensorflow\core\kernels\conv_ops.cc:712] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)
    Press any key to continue . . .

    I get this message when trying to Get Faces A and Faces B. Dont know how to fix it.

  81. This is what pops up for the Model function:
    C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    Model A Directory: C:\OpenFaceSwap\alignA
    Model B Directory: C:\OpenFaceSwap\alignB
    Training data directory: C:\OpenFaceSwap\model
    Loading data, this may take a while…
    Using live preview
    Loading Model from Model_Original plugin…
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1264: calling reduce_prod (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    Failed loading existing training data.
    Unable to open file (unable to open file: name = ‘C:\OpenFaceSwap\model\encoder.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)
    Loading Trainer from Model_Original plugin…
    Starting. Press “Enter” to stop training and save model
    Exception in thread Thread-3:
    Traceback (most recent call last):
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 916, in _bootstrap_inner
    self.run()
    File “C:\OpenFaceSwap\faceswap\lib\utils.py”, line 64, in run
    for item in self.generator:
    File “C:\OpenFaceSwap\faceswap\lib\training_data.py”, line 23, in minibatch
    assert length >= batchsize, “Number of images is lower than batch-size (Note that too few images may lead to bad training). # images: {}, batch-size: {}”.format(length, batchsize)
    AssertionError: Number of images is lower than batch-size (Note that too few images may lead to bad training). # images: 18, batch-size: 64
    Nothing happens after this, and pressing enter does nothing. What should I do?

    • Increase the number of images beyond 18. You can’t really train models on that few images….

      If that’s all you have, you can copy/paste the images to make at least 64, where there will be duplicates (with different filenames).
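
      A minimal sketch of that copy/paste approach (the folder path is an assumption; point it at whichever face folder is too small):

      import shutil
      from pathlib import Path

      folder = Path(r"C:\OpenFaceSwap\alignA")  # assumed location of the small face set
      files = sorted(folder.glob("*.jpg")) + sorted(folder.glob("*.png"))
      assert files, "No images found in folder"
      i = 0
      # Duplicate existing faces under new names until at least 64 files exist.
      while len(files) + i < 64:
          src = files[i % len(files)]
          shutil.copy(src, folder / ("dup%d_%s" % (i, src.name)))
          i += 1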

  82. Is it normal to have a scratching sound while using OpenFaceSwap? I can’t determine where it’s coming from, though it’s from the rear of the case, so probably my GPU.

    • Scratching? No, but your GPU fan may be running, or your hard drive platter may be spinning (louder on older models).

  83. When I click the Model button, the GUI window stops responding (as always), and I don’t know where to look for the “printed loss value” and the previews. Where are those? Is it normal that the GUI stops responding whenever the CMD pops up?

    • Yes, you can’t click on the GUI while a command is running. The loss values should appear in the cmd window. A separate window should show the previews.

    • CNN and GAN are unrelated. CNN is a method for face detection. Shaoanlu uses different methods for face detection. They aren’t included in the faceswap repo.

  84. Images and Faces work perfectly well, but Model does something strange: the CMD stops and doesn’t use any CPU or GPU…
    The GUI window freezes (as it did before, but before, it gave results…) and the CMD shows me this:
    https://pastebin.com/K7DivRnV
    Also the ./model folder remains completely empty.

  85. I’m sorry if I’m asking a dumb question, but can you use this to swap faces between image sets? I used to be able to do that with FakeApp, but that stopped working on my computer. I don’t want to use FakeApp 2, and I’m not as into converting movies as I am stills.

    • Yes, you can use image sets. Just skip the video part and pick the correct folders for the “Image” part.

  86. This is what I get when I hit “Swaps”
    …Error while calling cudaMalloc(&data, new_size*sizeof(float)) in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:195. code: 2, reason: out of memory
    Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10269.jpg. Reason: Error while calling cudaGetLastError() in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:117. code: 2, reason: out of memory
    3%|█▉ | 304/12003 [00:48<31:20, 6.22it/s]Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A1027.jpg. Reason: Error while calling cudaMalloc(&data, new_size*sizeof(float)) in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:195. code: 2, reason: out of memory
    Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10270.jpg. Reason: Error while calling cudaGetLastError() in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:117. code: 2, reason: out of memory
    Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10271.jpg. Reason: Error while calling cudaMalloc(&data, new_size*sizeof(float)) in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:195. code: 2, reason: out of memory
    Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10272.jpg. Reason: Error while calling cudaGetLastError() in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:117. code: 2, reason: out of memory
    3%|█▉ | 307/12003 [00:48<31:05, 6.27it/s]Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10273.jpg. Reason: Error while calling cudaMalloc(&data, new_size*sizeof(float)) in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:195. code: 2, reason: out of memory
    Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10274.jpg. Reason: Error while calling cudaGetLastError() in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:117. code: 2, reason: out of memory
    3%|██ | 310/12003 [00:49<30:53, 6.31it/s]Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10275.jpg. Reason: Error while calling cudaMalloc(&data, new_size*sizeof(float)) in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:195. code: 2, reason: out of memory
    Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10276.jpg. Reason: Error while calling cudaGetLastError() in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:117. code: 2, reason: out of memory
    Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10277.jpg. Reason: Error while calling cudaMalloc(&data, new_size*sizeof(float)) in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:195. code: 2, reason: out of memory
    Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10278.jpg. Reason: Error while calling cudaGetLastError() in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:117. code: 2, reason: out of memory
    3%|██ | 313/12003 [00:49<30:39, 6.36it/s]Failed to convert image: C:\OpenFaceSwap\imgA\480P_600K_163348682A10279.jpg. Reason: Error while calling cudaMalloc(&data, new_size*sizeof(float)) in file D:\pythonfs\scripts\dlib\dlib\dnn\gpu_data.cpp:195. code: 2, reason: out of memory
    I have a GTX 1060 with 8 gb ram, so it definitely is not a memory issue.

    • A 1060 only has 6, not 8 GB max. This might be a regular RAM error. Try converting a smaller set of images (like 1000) instead of 12000 and see if that works.

  87. Hello. I just started using OpenFaceSwap; I’m still not very used to the software, nor am I too technically inclined. I have downloaded all the required drivers (the correct versions, as far as I know). I have tested OpenFaceSwap on 3 separate videos. However, although all three of them have clearly defined faces in them, all 3 videos resulted in 0 faces detected using the Faces A option (after doing the video and image frame steps, of course).

    Here is a screenshot of the command prompt screen I always receive.

    https://imgur.com/00zMfab

    Do tell me if I have made an error somewhere.

  88. When I click Images A it works, extracting all the frames from the video. However, when I click Faces A, cmd pops up for half a second and disappears, doing nothing at all.

  89. Whenever I hit Movie, I get this:
    ffmpeg version N-89369-g5a93a85fd0 Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 7.2.0 (GCC)
    configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
    libavutil 56. 4.100 / 56. 4.100
    libavcodec 58. 6.102 / 58. 6.102
    libavformat 58. 2.103 / 58. 2.103
    libavdevice 58. 0.100 / 58. 0.100
    libavfilter 7. 6.100 / 7. 6.100
    libswscale 5. 0.101 / 5. 0.101
    libswresample 3. 0.101 / 3. 0.101
    libpostproc 55. 0.100 / 55. 0.100
    [image2 @ 0000026d8663a680] Could find no file with path ‘.\merge\SecondLinearJapanesebeetle%d.png’ and index in the range 0-4
    .\merge\SecondLinearJapanesebeetle%d.png: No such file or directory
    Press any key to continue . . .
    The command is
    ffmpeg -r 25 -f image2 -y -i ".\merge\SecondLinearJapanesebeetle"%d.png -vcodec libx264 -crf 15 -pix_fmt yuv420p "480P_600K_163348682_A"
    And the images start with “SecondLinearJapanesebeetleA1.png” and continue.
    What am I missing?

    • If you change the settings manually, the “auto” settings won’t always work.

      Manually change the “prefix” to SecondLinearJapanesebeetleA (note the extra A)

      Or change the entire command to

      ffmpeg -r 25 -f image2 -y -i ".\merge\SecondLinearJapanesebeetleA"%d.png -vcodec libx264 -crf 15 -pix_fmt yuv420p "480P_600K_163348682_A"

  90. I would like to swap just images, not videos. Can this be done with this program? I used to use FakeApp version 1, but it doesn’t work for me any more, and would like to try something different.

    • Yes, skip the video part, and point to the image folders for the “image” sections. You still need enough data to train properly.

  91. Help, I’m not as tech savvy as I thought I was. It works until the point where I select “Faces A” or “Faces B”, but nothing is extracted. Here is the message that I get:

    C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    Input Directory: C:\OpenFaceSwap\imgA
    Output Directory: C:\OpenFaceSwap\alignA
    Filter: filter.jpg
    Using json serializer
    Starting, this may take a while…
    Loading Extract from Extract_Align plugin…
    0%| | 0/34536 [00:00<?, ?it/s] […] Created TensorFlow device […] -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    2018-06-20 00:22:15.569480: E T:\src\github\tensorflow\tensorflow\stream_executor\cuda\cuda_dnn.cc:396] Loaded runtime CuDNN library: 7104 (compatibility version 7100) but source was compiled with 7003 (compatibility version 7000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
    2018-06-20 00:22:15.577315: F T:\src\github\tensorflow\tensorflow\core\kernels\conv_ops.cc:712] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)
    Press any key to continue . . .

  92. Do you know how I can get around this?
    _______________________________________________________________________________________________
    Exception in thread Thread-1:
    Traceback (most recent call last):
    File “S:\Program Files (x86)\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 916, in _bootstrap_inner
    self.run()
    File “S:\Program Files (x86)\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 864, in run
    self._target(*self._args, **self._kwargs)
    File “S:\Program Files (x86)\OpenFaceSwap\faceswap_exp\scripts\train.py”, line 181, in processThread
    raise e
    File “S:\Program Files (x86)\OpenFaceSwap\faceswap_exp\scripts\train.py”, line 153, in processThread
    trainer = trainer(model, images_A, images_B, self.arguments.batch_size, self.arguments.perceptual_loss)
    File “S:\Program Files (x86)\OpenFaceSwap\faceswap_exp\plugins\Model_DFaker\Trainer.py”, line 74, in __init__
    images_A[:,:,:3] += images_B[:,:,:3].mean( axis=(0,1,2) ) – images_A[:,:,:3].mean( axis=(0,1,2) )
    IndexError: too many indices for array
    _______________________________________________________________________________________________
    I know it’s happening because I have too many images, but this only happens when using dfaker on OpenFaceSwap’s experimental engine. If I use dfaker’s python scripts with the same number of images, training will run perfectly fine. It seems to be a limitation in the coding, not a limitation of memory. Is there something I could change in that last line of code to fix this? Thanks

    • I don’t think it has to do with the number of images. Is one of the extra images you are adding the wrong size, unprocessed, etc.?

      You need to re-extract all images using the experimental engine.

      • Hmmm, you’re right. I don’t know what went wrong, but I re-extracted all the images and this time it gave me the memory error I generally get using OpenFaceSwap. I would like to train with OFS, as it supports training from CNN extractions, unlike the python scripts, but the problem is that I can load a lot more images into the python scripts before I get memory errors compared to OFS. With the python scripts I have never run into any problems loading images, but with OFS it seems to be limited to somewhere around 14k images, at least on my machine. Now, my understanding is that the model only uses a random subset of the loaded images, not all of them. It could be that OFS is accidentally ignoring this and actually trying to use all of the images to train the model. I say this because on the python scripts I happen to have changed a line to this:
        minImages = 7000#min(len(fn_A),len(fn_B))*20
        So my understanding is that training will use up to 7k images for subject A and up to 7k images for subject B, a total of up to 14k images. I set that 7k limit because any more caused memory errors using the python scripts. So you can probably see the similarity: loading more than 14k images into OFS, even when minImages is set to the default 2k, also causes the memory errors.

  93. Hey,

    I wanted to use DFaker, so I added the _exp in openfaceswapconfig.txt as follows:

    Base align B command
    python\scripts\python.bat faceswap_exp\faceswap.py extract
    Base train command
    python\scripts\python.bat faceswap_exp\faceswap.py train
    Base merge command
    python\scripts\python.bat faceswap_exp\faceswap.py convert

    It won’t get beyond the loading of the Model_DFaker plugin.
    How do I get it to work?

    Here is the complete procedure:
    https://pastebin.com/xSG9abe7

  94. Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    Failed loading existing training data.
    Unable to open file (unable to open file: name = ‘C:\OpenFaceSwap\model\encoder.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)
    Loading Trainer from Model_Original plugin…
    Starting. Press “Enter” to stop training and save model
    Exception in thread Thread-2:
    Traceback (most recent call last):
    File “C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\threading.py”, line 916, in _bootstrap_inner
    self.run()
    File “C:\OpenFaceSwap\faceswap\lib\utils.py”, line 64, in run
    for item in self.generator:
    File “C:\OpenFaceSwap\faceswap\lib\training_data.py”, line 23, in minibatch
    assert length >= batchsize, “Number of images is lower than batch-size (Note that too few images may lead to bad training). # images: {}, batch-size: {}”.format(length, batchsize)
    AssertionError: Number of images is lower than batch-size (Note that too few images may lead to bad training). # images: 0, batch-size: 64

    stuck at Model, nothing is happening

    • The error says your batch size is 64 but you have 0 images.

      Check your paths. Make sure you extract the faces. And have at least 64 FACES, not just images, present.

      If your face extraction fails, see the other comments above about 0 faces detected.

  95. Hi, I followed all the steps in the guide but I have a problem from the start:
    when I try to install CUDA 9.0 it gives me an installation error. I searched around and found the cause to be the Visual Studio Integrations (it seems to be a problem with Microsoft Visual Studio Redistributable 2017, but I’m using 2015, so I don’t know why it gives me this error). So I installed CUDA without them and everything went fine, but when I use OpenFaceSwap and click on FACES A (or B) it doesn’t recognize faces (it finds the images but still 0 faces), and I think this is probably caused by the lack of Visual Studio Integrations.
    What can I do? Thanks

    • You don’t need VS to install CUDA. There’s a warning, but you can ignore it.

      You do need to install cuDNN correctly (copy version 7.0 to correct folder). Also see the above comments about 0 faces.

  96. Thank you first of all for your great app!
    How about GAN2 from the same author? It has some additional requirements, I know, but it would be great to try it out without installing all the python libraries.

  97. The training is OK and the faces seem close enough, but the skin tone is off. The face itself is well positioned and the expressions are correct, but the program is aligning the entire picture instead of only the face, so there is a square around the face… Am I doing something wrong? Will the skin tone and squares get fixed if I keep training?

  98. It’s me again.
    Nothing seems to work and I don’t understand why.
    I also solved the CUDA problem and asked on the forum, and no one can help me. Everything I do takes me to the same error: 0 faces detected.
    What can I do?

  99. Fix for OOM / no AVX – working on a Core 2 Duo

    People getting the DLL error and the OOM error can fix both by doing the following:

    1: DLL error / no AVX – download the no-AVX compile mentioned above and replace your python folder with the new download.

    Now you pass the DLL error and get the OOM error.

    2: Navigate to the low mem folder and edit the model.py in plugins so the encoder value = 150, not 512 (it is the first variable defined in the file), and always run in lowmem mode. Having reduced the node value, you will not get an OOM error. Tested on an Intel E5200 (Core 2 Duo, no AVX) with 4GB system RAM and an NVIDIA 730 GPU with 2GB VRAM. See the sketch at the end of this comment.

    I find the buttons do nothing in the application, but copying the extract and train commands to a cmd window in the c:/openfaceswap folder DOES work. AVX is not required, and more than 2GB VRAM is not required; models train well. Some application development is needed to pass a nodes variable into the train command instead of a hardcoded value, and another bonus would be fixing the issue where a folder name containing a space causes the app to bomb out, i.e. a command with a folder named c:/fakes/project will run, but a folder named “c:/fakes/project – 1” will crap out.
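
    A sketch of the edit described in step 2 (the variable name ENCODER_DIM and the exact file location are assumptions based on the faceswap Model_LowMem plugin; check the first variable defined in your own copy of model.py):

    # In faceswap_lowmem\plugins\Model_LowMem\Model.py (path assumed):
    ENCODER_DIM = 512   # stock value
    # change to:
    ENCODER_DIM = 150   # small enough to train on a 2GB card, per the comment above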

  100. Just curious: I’m thinking of trying this app instead of FakeApp. Will I need to extract my faceset again using this, or can I reuse the aligned faces I did with FakeApp 2.2?

    • The basic engine will probably work – although the json format for merging may or may not be compatible. For training it should be fine. For the experimental engine (dfaker, etc.), you need to extract again.

  101. I received the following errors while training. What can I do?

    C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1264: calling reduce_prod (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    Failed loading existing training data.
    Unable to open file (unable to open file: name = 'C:\OpenFaceSwap\model\encoder.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
    Loading Trainer from Model_LowMem plugin…
    Starting. Press "Enter" to stop training and save model
    2018-07-09 14:22:23.045785: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    2018-07-09 14:22:23.051682: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1344] Found device 0 with properties:
    name: GeForce GTX 1060 3GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085
    pciBusID: 0000:01:00.0
    totalMemory: 3.00GiB freeMemory: 2.34GiB
    2018-07-09 14:22:23.057747: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1423] Adding visible gpu devices: 0
    2018-07-09 14:22:23.566666: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
    2018-07-09 14:22:23.569518: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:917] 0
    2018-07-09 14:22:23.570916: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:930] 0: N
    2018-07-09 14:22:23.573304: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2049 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 3GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
    2018-07-09 14:22:25.817162: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.15GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    cannot reshape array of size 294912 into shape (4,7,3,64,64,3)

    • That’s a low-memory (OOM) error. Reduce your model complexity, try the low_mem engine, etc. See the forums, and the sketch below.
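
      If you want to rule out TensorFlow simply grabbing all VRAM up front, one common TF 1.x mitigation is enabling memory growth before training starts. A minimal sketch (not something the stock GUI exposes; you would add it near the top of the training script):

        import tensorflow as tf
        from keras import backend as K

        # Allocate GPU memory on demand instead of reserving
        # nearly all of it at startup.
        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True
        K.set_session(tf.Session(config=config))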

  102. Sorry for the amateur question, but where is the configuration text file? I can’t seem to find it in the installation folder. I’m trying to use the LowMem version.

  103. Everything was good up until “Swaps.” I’m getting the error:
    Input Directory: C:\OpenFaceSwap\imgA
    Output Directory: C:\OpenFaceSwap\merge
    Using json serializer
    Starting, this may take a while…
    Loading Model from Model_Original plugin…
    C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1264: calling reduce_prod (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    WARNING:tensorflow:From C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    Failed loading existing training data.
    Dimension 1 in both shapes must be equal, but are 1024 and 256. Shapes are [16384,1024] and [16384,256]. for 'Assign_8' (op: 'Assign') with input shapes: [16384,1024], [16384,256].
    Model Not Found! A valid model must be provided to continue!
    Press any key to continue . . .

    Any advice?

    • Did you extract all the faces correctly with the program on the images you intend to swap into? You need proper alignments.json files for the merge to work.
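
      One way to sanity-check the merge inputs is to confirm an alignments file exists next to your frames and actually has entries. A minimal sketch, assuming the alignments.json sits alongside the input images (the imgA path is taken from the log above; yours may differ):

        import json
        import os

        IMG_DIR = r"C:\OpenFaceSwap\imgA"
        alignments = os.path.join(IMG_DIR, "alignments.json")

        if not os.path.exists(alignments):
            print("No alignments.json - re-run face extraction on these frames.")
        else:
            with open(alignments) as fh:
                entries = json.load(fh)
            print("alignments.json holds", len(entries), "entries")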

  104. I started a new project for a different image set with a different video than my first try. However, the final step continues to output the same video from my previous project.

    Any ideas on what to do?

    • Check your paths. Also, if you are using default locations, move the old files or delete them with the trash icon.

  105. Installed it all, but when I click on Video A I get this error:

    (OpenFaceSwap.exe:9464): GLib-GIO-ERROR **: No GSettings schemas are installed on the system

    What should i do?

    • This doesn’t sound like a Windows error. You need Win10; otherwise, I’ve never heard of this error before.

  106. Faces detected: 0 when I use the cnn detector, but there is no problem when I use the hog detector. However, I get an error when I click Model: it always fails with “ResourceExhaustedError”, even when I use a batch size of 4 and the LowMem trainer.
    What’s wrong with my GPU?
    I installed all the software mentioned above, updated my NVIDIA driver, and already checked the environment variables, cuDNN, etc.

    I have NVIDIA 940mx with 5.0 compute capability
    6GB VRAM
    8 GB RAM
    i7 7500 processor

  107. Just a noob question, looking for some advice.

    I’ve been getting fairly poor results even though my model numbers are below 0.2. I’m thinking it has to do with my image sets. My Faces B folder has ~3000 high-quality face pics, so I’m thinking that is fine. When I extract Images A from Video A I end up with, say, 30000 images; when I extract the Faces A from Images A I end up with, say, 25000 face pics.

    Now here is my question. From the Images A folder I have been removing only pics that aren’t of the actress of interest or where her face is covered, but I am still left with, say, 20000 images. Is this the right approach, or should I be removing significantly more to distill my Faces A image set down to, say, 3000 high-quality images?

    Thanks for any advice, folks!

    BTW I am running:
    1080 GTX w 8GB VRAM
    8GB DDR3 system RAM
    i5-4670k CPU

    • The distribution of angles, lighting, and so on matters more than the absolute number of training images. Ideally, every face angle you want to match in Video A is present in Video B. Don’t waste time training the same face angle over and over if you are missing a side view, for example.

  108. Loving the GUI so far! Got it working perfectly on an old i5 750 + 970 on Windows 7 using the noAVX version and the CMD trick mentioned by Warpigz.

    I do have a question about the models, though. How would I go about making a second fake with the same video, images, and Faces B, but using a different clip for the A’s?
    Do I need to retrain the model completely, or is there a shortcut to save on some training time? Same people but different videos, basically.

    • You can reuse the same model; you just have to extract the new frames from the new videos.

      Make sure you do not overwrite the old model – make a backup copy just in case (a sketch follows below). Then point to your existing model when you train.
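
      A minimal sketch of the backup step (the model folder below matches the default seen in the training logs above; adjust if yours differs):

        import datetime
        import shutil

        MODEL_DIR = r"C:\OpenFaceSwap\model"

        # Timestamped copy, e.g. C:\OpenFaceSwap\model_backup_20180709;
        # note copytree fails if the destination already exists.
        stamp = datetime.date.today().strftime("%Y%m%d")
        backup = "{}_backup_{}".format(MODEL_DIR, stamp)
        shutil.copytree(MODEL_DIR, backup)
        print("Backed up", MODEL_DIR, "->", backup)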

  109. Traceback (most recent call last):
    File "faceswap\faceswap.py", line 8, in <module>
    from lib.cli import FullHelpArgumentParser
    File "C:\OpenFaceSwap\faceswap\lib\cli.py", line 7, in <module>
    from lib.FaceFilter import FaceFilter
    File "C:\OpenFaceSwap\faceswap\lib\FaceFilter.py", line 3, in <module>
    import face_recognition
    File "C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py", line 7, in <module>
    from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
    File "C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py", line 4, in <module>
    import dlib
    ImportError: DLL load failed: The specified module could not be found.
    Press any key to continue . . .

    I’m having trouble getting faces to register.
    NVIDIA GEForce GTX 1070
    16GB RAM
    i7-6700 CPU

    • Most likely a CUDA/cuDNN/driver install issue. Double-check that everything matches the required versions exactly. Also double-check your Windows PATH entries.
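
      A quick way to test the install from the bundled python console: first confirm the dlib DLL loads at all (that is the import failing in the traceback above), then list the devices TensorFlow can see. A minimal sketch; if no GPU entry appears, the CUDA/cuDNN setup is the problem:

        # This import is exactly what fails in the traceback when the
        # CUDA/cuDNN DLLs are missing from the path.
        import dlib
        print("dlib", dlib.__version__)

        from tensorflow.python.client import device_lib

        # Look for a "/device:GPU:0" entry naming your card.
        for device in device_lib.list_local_devices():
            print(device.name, device.physical_device_desc)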

  110. width not divisible by 2 (499×281)
    Error initializing output stream 0:0 — Error while opening encoder for output stream #0:0 – maybe incorrect parameters such as bit_rate, rate, width or height

    I’m getting this error message when I try to use the Movie tool.
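
    • The H.264 encoder requires the frame width and height to be even, and 499×281 is odd in both. One common workaround is to pad (or rescale) the frames to even dimensions before encoding. A minimal sketch calling ffmpeg from python (the file names are placeholders):

        import subprocess

        # Pad width and height up to the next even number; H.264
        # output requires both dimensions to be divisible by 2.
        subprocess.run([
            "ffmpeg", "-i", "input.mp4",
            "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2",
            "output.mp4",
        ])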

  111. How will we know if there is an update for any part of the software package? 😀 I hear people talking about updating from git, but I’m unsure what that means or how to do it. I downloaded OFS a few months ago, if that helps.

    • You can follow the main “faceswap” repository on GitHub. That is the “engine” portion of the software.

  112. Can you try to make some kind of hack between Google Colab (colab.research.google.com) and your awesome code? They are providing K80 GPUs. It would greatly help people with crappy laptops like me.

  113. I am a traditional sculptor who would like to employ this software to capture lots of angles of faces for reference. Is there a way to have the output (after processing the frames to find the faces) match the native resolution of the video source? The current size of 256×256 makes the faces a little too small to work from while sculpting. Many thanks for such a fantastic tool; having a GUI is invaluable.
