Python faceswap quick start guide

How to start using the Python faceswap scripts in Windows

The open-source FakeApp alternatives for creating deepfakes provide faster training speeds, higher-quality faceswaps, and other features, including better security. If you have read my benchmarks post or kept track of the latest open-source developments, switching to the Python scripts makes perfect sense. This post gives a quick overview of how to get started with the Python scripts.

General prerequisites

This guide assumes you are using a Windows 10 operating system, Intel CPU, and modern NVIDIA graphics card. You should also have already installed:

  • CUDA 9.0, not 9.1 (Available here)
  • cuDNN 7, not 7.1 (You have to register here to download this)
  • Latest graphics card drivers (Available here)
  • vc_redist.x64 2015 (Google this)

Note that there are a few differences from the original tutorial. I now recommend you install CUDA 9 instead of CUDA 8. You should NOT install CUDA 9.1, as NVIDIA does not recommend it for TensorFlow 1.5. To install cuDNN, follow the instructions here, and do NOT install 7.1 or higher. After extracting the cuDNN files, you will have to copy three files into three separate CUDA directories. Make sure that your paths are set correctly. You may wish to uninstall CUDA 8 first if you still have it on your system.
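As a sketch, the copy step looks like the following. The CUDA path is the default for a 9.0 install; the cuDNN extraction path is an assumption, so adjust both to match your system:

```shell
REM Copy the extracted cuDNN files into the matching CUDA 9.0 directories.
REM C:\cudnn is an assumed extraction path; change it to wherever you unzipped cuDNN.
copy C:\cudnn\cuda\bin\cudnn64_7.dll "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin"
copy C:\cudnn\cuda\include\cudnn.h "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\include"
copy C:\cudnn\cuda\lib\x64\cudnn.lib "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\lib\x64"
```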

You will also need to update your graphics card drivers, as older drivers do not support CUDA 9.

Python and scripts installation

You can either install python and the scripts the easy way or the hard way.

Portable python installation

For the easy way, simply download the portable WinPython package I created from this link. Extract the zip file and you should see a directory called “pythondf”. Within the folder pythondf\python-3.6.3.amd64, you will see three directories labeled “df”, “faceswap”, and “faceswap_lowmem”. These are the root directories of their respective GitHub repositories. You’re all set with installation. Skip ahead.

If you would like to install Python and the scripts yourself, here are the steps you need to take. These instructions will install both the faceswap and dfaker GitHub repositories.

Manual Anaconda installation

Install the latest version of Anaconda that supports python 3.6 from here.

Now run the Anaconda command prompt. This is different from the general Windows command prompt; you can find it by typing “prompt” in the Windows search bar.

Install dependencies

Create a virtual environment from within conda. You can name it “myenv” or something else.

conda create -n myenv python=3.6 numpy pyyaml mkl
conda install -c peterjc123 pytorch cuda90

This installs PyTorch, which you need for the dfaker repo. If you only want to install faceswap, you can skip that last command (but you should still create a virtual environment).

Now, you will need to install the following to compile dlib with GPU support: Visual Studio 2015 (use a custom install and include the SDK and C++ packages), CMake, Boost, and Git.

Make sure that your environment is active, if it isn’t already.

activate myenv

You may wish to create a new directory. Once you are in your desired project directory, clone the dlib library.

git clone

This will download a new directory named “dlib”. Enter the directory and compile as follows.

cd dlib
python setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA

You can now install the rest of the required packages.

pip install pathlib==1.0.1
pip install scandir==1.6
pip install h5py==2.7.1
pip install Keras==2.1.2
pip install opencv-python
pip install tensorflow-gpu==1.5.0
pip install scikit-image
pip install tqdm

It is not required in the latest commit of faceswap, but if you like, you can install:

pip install face_recognition

Install faceswap

Return to your project’s root directory.

git clone

This creates a directory named “faceswap”.

Install dfaker

If you would like to install the dfaker repo you need to do the following.

git clone
cd df
git clone
cd keras-contrib
python setup.py install
cd ..
git clone
cd face-alignment
python setup.py install

If you would like to activate GPU-based face extraction, comment out the line below in the dfaker source by inserting a pound symbol:

#dlib.cnn_face_detection_model_v1 = monkey_patch_face_detector
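For context, that line is an instance of monkey patching: dfaker replaces a dlib attribute at runtime to force CPU-based extraction, and commenting the patch out restores the GPU path. Here is a minimal, self-contained illustration of the idea, with made-up names rather than dfaker's actual code:

```python
# Monkey-patching sketch with hypothetical names (not dfaker's real code).
class Detector:
    """Stands in for a dlib detector class."""
    def detect(self):
        return "gpu"

def cpu_fallback(self):
    """Replacement behavior, like dfaker's monkey_patch_face_detector."""
    return "cpu"

# The patch: reassigning the attribute at runtime swaps the behavior,
# just as dfaker overrides dlib.cnn_face_detection_model_v1.
Detector.detect = cpu_fallback

print(Detector().detect())  # -> cpu
# "Commenting out" the patch line above would leave the original
# GPU code path active instead.
```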

You are now done installing the python scripts manually.

Faceswap guide

You can find a starting guide to faceswap at the original GitHub repository here. This should cover basic tasks. If you have further questions, you can ask in the forums or the comments below.

Here are a few additional tips for beginners:

  • Remember that you must always start the Anaconda command prompt, and then activate your virtual environment.
  • The paths in the usage guide follow Linux conventions. For simplicity, just type out the full path each time you need a path and you will be set (e.g. C:\myproj\photos\A).
  • Execute the commands from within the “faceswap” directory (the root of the cloned repository).
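If you end up writing small helper scripts around these tools, Python's pathlib builds Windows paths without any backslash-escaping headaches. A quick sketch (the project path shown is just an example):

```python
from pathlib import PureWindowsPath

# Hypothetical project layout; substitute your own drive and folders.
project = PureWindowsPath(r"C:\myproj")
photos_a = project / "photos" / "A"

print(photos_a)        # C:\myproj\photos\A
print(photos_a.parts)  # ('C:\\', 'myproj', 'photos', 'A')
```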

Dfaker guide

Unlike faceswap, dfaker is mostly developed by one person, so it isn’t as polished for general use. You will have to change the code manually to adjust options, for example.

To extract faces, run the command below from the df directory using the Anaconda prompt, with your virtual environment activated:

python image_directory --file-type png

where image_directory is the path containing the images you want to extract from. The default file type is jpg, so you can leave out the last part if you are extracting from jpg files.

This will create a new folder named “aligned” and a file named “alignments.json” within your original image path. If this is for face A, you need to manually copy the aligned images AND the alignments.json file into a folder located at df\data\A. For example, if you installed the df repo in D:\, you need to copy the aligned images and .json file to D:\df\data\A.

Repeat on a second directory containing face B, and again manually copy your files to the location df\data\B.

To train your model, simply enter the command below from the df directory:


If you need to change the batch size, you will have to manually edit the training script where it says:

batch_size = int(32)
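Editing that value in place works fine (a smaller number like 16 lowers memory use at the cost of slower training). If you switch batch sizes often, a throwaway snippet like this can rewrite the line for you; it is my own illustration, not part of dfaker, and you should back up the file before patching it:

```python
import re

def set_batch_size(source_text, new_size):
    """Rewrite a 'batch_size = int(N)' assignment in a script's source
    text. Illustrative helper only; not part of dfaker."""
    return re.sub(r"batch_size\s*=\s*int\(\d+\)",
                  f"batch_size = int({new_size})",
                  source_text)

print(set_batch_size("batch_size = int(32)", 16))  # batch_size = int(16)
```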

When done, run the command: image_directory

on the face A directory that you want to convert into face B. There are no mask or blurring options, as the settings are hardcoded into a more complicated algorithm.

Create the final video as usual with ffmpeg.
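As an example, a typical ffmpeg invocation for rebuilding a clip from numbered frames might look like this (the frame-name pattern and frame rate are assumptions; match them to however you split the original video):

```shell
# Assemble numbered PNG frames back into an H.264 video.
# The %05d pattern and 30 fps are examples only.
ffmpeg -framerate 30 -i frame%05d.png -c:v libx264 -pix_fmt yuv420p output.mp4
```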

If you need to adjust the conversion settings, I would suggest using After Effects or another video editor to manually merge the original and converted footage together. Because you obtain a much larger face area, you have more room to mask the face in other software. Plus, if you spend a long time training your model, it’s worth spending the extra hour or so to merge the video properly.

Future updates

Hopefully, this is enough to get more people started on the faceswap and dfaker scripts. I’ll update this guide as I have more time in the future.

32 thoughts on “Python faceswap quick start guide”

  1. Thank you very much for the guide!

    Just wanted to let people know that when using the portable version you provided, although extracting the images works with the default folder structure, training does not. I figured out that if you move them to the df directory within the portable install (so it would be D:\pythondf\python-3.6.3.amd64\df\data\A in your example) it will work then. Same for the models folder too.

    I’m sure this is going to make me look like an idiot, but I don’t understand your entire last paragraph. What do you mean by “If you need to adjust the conversion settings”? When you talk about using After Effects to “manually merge the original and converted footage together”, are you just talking about the parts of the video that had no frames? It must be something else, cuz ffmpeg would work fine for that. And what do you mean by “masking the face”? I also don’t understand how using ffmpeg is not merging the video properly.

    • You can’t adjust the settings for blending the face in dfaker – like seamless, blur, erosion in faceswap. It will use its own algorithm to create the final blend, which may still have a slight seam where the new face connects to the old body. Sometimes it works well, sometimes it doesn’t. It works better if the original video clips have similar face colors and lighting.

      If you are picky and notice a seam in the new video, you can use After Effects or similar software to combine the new video clip with the old video clip and better mix the new face into the old body. A “mask” is a kind of selection tool used to tell the software which region of the new video to transfer to the other, and then you would “feather” the edge of the mask to blur the borders into the old footage. These are all After Effects terms.

      I mention this, again, because you can’t currently change the blending settings in dfaker. The way it merges faces is a bit more complicated.

      • Thanks for the explanation, I appreciate you getting back to me. Is this masking and feathering done on a frame-by-frame basis, or can After Effects apply this to sections of the video?

        • It is a mix of both. There is an auto face/outline tracking tool (I haven’t tested), which may be able to create the mask automatically across the entire video. However, you also can create masks semi-automatically by manually inserting “keyframes” at intervals. The software will interpolate the mask between keyframes. For example, if the face moves left, and you add a keyframe at the start and stop of motion, the mask will move in between. In practice, you probably use whatever auto-tool there is, then adjust keyframes manually where you need to.

    • If you install CUDA 9, you may have to change the paths between CUDA 8/9 when you run either program.

      On the cloud, you can also shutdown one instance (not delete, just turn it off), then create a new instance and start it. You can have a CUDA 9 and CUDA 8 instance, for example, or any other type of environment you want. You may have to pay a minor amount (like $2 a month) to store the extra instance. If your GPU quota = 1, you can still create multiple GPU instances. You can only activate one at a time.

    • Don’t know anything about using the Google Cloud, but I’ve had Cuda 9 installed side-by-side with Cuda 8 ever since 2.2 came out and almost always ended up needing to use 1.1 for the final conversion because I didn’t get the blur & kernel right on the first try and didn’t want to go through the extraction and aligning again just to try a different setting. I’ve never had any issues.

  2. When you say install Visual Studio 2015 with SDK and C++ packages do you mean:

    A) Visual Studio Community 2015+Modeling SDK+Visual C++ Redistributable
    B) Visual Studio 2015 SDK+Visual C++ Redistributable
    C) Visual Studio 2015 Modeling SDK+ Visual C++ Redistributable

    Also, should I be installing Boost, CMake and Git inside of the Anaconda environment or should I download and install them normally via windows installer…?

    Thanks for any help you can give!

    • The SDK is an option within Visual Studio 2015 – don’t use “express” installation. Use “custom” and pick the options you need, including the SDK. The redistributable is a separate file.

      You can install Boost, CMake, and Git via Windows. You can also build boost yourself, but that is a lot more painful and not recommended.

      • Sorry, I’m still confused.

        There’s like 20 download options for VS 2015 on the official site and as I understand it, the “free” version of Visual Studio 2015 is Visual Studio Community 2015.

        So I downloaded VS Community 2015, but when I go to install it, I DO see options for the Windows SDKs, but I do NOT see any C++ packages.

        Can you please link to EXACTLY what version of VS 2015 I should be using or somehow clarify what packages I need exactly and how I obtain them via VS Community 2015…?

        Thanks again!

  3. Thank you for posting this guide. It’s been really helpful with setting up these scripts. There aren’t many guides out there on dfaker, this being one of the very few (other than the official readme)

    I noticed that on the guide for the dfaker example, there is a typo. --filetype produces an error in dfaker; --file-type works, however.

  4. When I look at dfaker’s source I see that it is looking for arguments such as:

    --seamlessClone (default False) “attempt to use opencv seamlessClone”
    --doublePass (default False) “pass the original prediction output back through for a second pass”
    --maskType (default FaceHullAndRect) “type of masking to use around the face – options are [FaceHullAndRect, FaceHull, Rect]”
    --blurSize (default 4)
    --erosionKernelSize (default 2)

    Has anyone played with these?

  5. I’m going to break some eggs and play with some of these options 😉

    After taking a quick look:
    blurSize – might be hard coded to 11
    seamlessClone – looks like it does something, there are additional options and values to play with though
    doublePass – functionality is there, but is forced to False
    maskType – non functional, I think the mask is an ellipse inside of a rectangle
    erosionKernelSize – looks like this works

  6. Hi,

    How up to date, as of today (03/20), is your portable version?
    Is there an advantage to a manual install compared to your version as of today?

    • The portable version is from mid-to-late February. There have been many updates recently, but you can just pull them yourself into the portable install with “git”.

      A manual install is pretty similar to the portable one. If you are trying to run other programs with complex dependencies, you may wish to use a regular conda install instead of winpython… but for most things it should be the same.

  7. After manually installing faceswap according to your guide without problems, I tried to run the extraction but it fails all the time. The ending error message is:
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    2018-03-20 08:33:16.596928: E C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\] Loaded runtime CuDNN library: 7101 (compatibility version 7100) but source was compiled with 7003 (compatibility version 7000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
    2018-03-20 08:33:16.608643: F C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\kernels\] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)

    It seems from this message that my cuDNN version did not match. However, I was able to run your portable version using the PowerShell prompt it provides.

  8. Hi, first of all, thanks for the guide. I followed your guide and decided to use the portable version.
    I installed CUDA 9.0 and its patches and cuDNN 7 (I checked the path in control sysdm.cpl), and vc_redist.x64 was already installed.
    When I try to run commands, I get this error (here is an example):

    “F:\pythondf\python-3.6.3.amd64\df>python F:/fakes/data_A --file-type png
    Traceback (most recent call last):
    File “”, line 3, in
    import dlib
    ImportError: DLL load failed:”

    I always get this “import dlib” error. Did I miss installing something?

    Thanks for help

    • Strange… finding that it works for some people and not others. I haven’t been able to figure out why it fails for some.

      Do you have cuDNN 7, not 7.1? Were you able to run the packaged faceswap successfully, or what errors do you get for faceswap?

      • Hi, thanks for the answer. You are right, I had cuDNN 7.1, but now I have cuDNN 7.0.5 and still get the same error. I tried to run the faceswap commands and got this:

        “Traceback (most recent call last):
        File “”, line 10, in
        from scripts.extract import ExtractTrainingData
        File “F:\pythondf\python-3.6.3.amd64\faceswap\scripts\”, line 7, in
        from lib.cli import DirectoryProcessor
        File “F:\pythondf\python-3.6.3.amd64\faceswap\lib\”, line 6, in
        from lib.FaceFilter import FaceFilter
        File “F:\pythondf\python-3.6.3.amd64\faceswap\lib\”, line 3, in
        import face_recognition
        File “F:\pythondf\python-3.6.3.amd64\lib\site-packages\face_recognition\”, line 7, in
        from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
        File “F:\pythondf\python-3.6.3.amd64\lib\site-packages\face_recognition\”, line 4, in
        import dlib
        ImportError: DLL load failed: ”

        Thanks for help.

        • To check: Windows 10? Intel processor?
          VC_redist installed?

          Any other python versions installed on the same computer?

          By the way, you can post in the forum too if the comment chain gets too long. More people will see it and possibly chime in.

  9. What method is most effective for making the swap look more realistic (i.e. better blending around the edges of the face swaps)? Just letting it run longer in the training portion, or do I need to run it through a separate program?

    • You have to adjust the blur, etc. settings during the merge step.

      For the highest quality (but most time-consuming), you should do the swap with no blurring and convert the largest possible face. Then, manually blur the edges using a third party program like After Effects.

      Training longer only helps to a certain point. You can also try to pre-match the colors for the input and output by adjusting the hue, etc. with something like batch GIMP.

  10. The proper building command should be:
    python setup.py install -G "Visual Studio 14 2015" --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA

    It will work properly even if you have multiple MSVC versions installed.

Leave a Comment