How to start using the Python faceswap scripts on Windows
The open source FakeApp alternatives for creating deepfakes provide faster training speeds, higher-quality faceswaps, and other features, including better security. If you have read my benchmarks post or kept track of the latest open source developments, switching to the Python scripts makes perfect sense. This post gives a quick overview of how to get started with them.
This guide assumes you are using a Windows 10 operating system, Intel CPU, and modern NVIDIA graphics card. You should also have already installed:
- CUDA 9.0, not 9.1 (Available here)
- cuDNN 7, not 7.1 (You have to register here to download this)
- Latest graphics card drivers (Available here)
- vc_redist.x64 2015 (Google this)
Note that there are a few differences from the original tutorial. I now recommend you install CUDA 9 instead of CUDA 8. You should NOT install CUDA 9.1, as NVIDIA does not recommend it for TensorFlow 1.5. To install cuDNN, follow the instructions here, and do NOT install 7.1 or higher. After extracting the cuDNN files, you will have to copy three files into three separate CUDA directories. Make sure that your paths are set correctly. You may wish to uninstall CUDA 8 first if you still have it on your system.
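The copy step can be sketched as a small script. Both paths below are assumptions (a cuDNN archive extracted into your Downloads folder and a default CUDA 9.0 install location); adjust them to match your machine.

```python
import os
import shutil

# Assumed locations -- adjust both paths to match your machine.
CUDNN_DIR = r"C:\Downloads\cudnn-9.0-windows10-x64-v7\cuda"
CUDA_DIR = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0"

# Each extracted cuDNN file goes into the matching CUDA subdirectory.
copies = {
    r"bin\cudnn64_7.dll": r"bin",
    r"include\cudnn.h": r"include",
    r"lib\x64\cudnn.lib": r"lib\x64",
}

for src_rel, dst_rel in copies.items():
    src = os.path.join(CUDNN_DIR, src_rel)
    dst = os.path.join(CUDA_DIR, dst_rel)
    if os.path.exists(src):
        shutil.copy2(src, dst)
    else:
        # Source not found at the assumed path; show what would be copied.
        print("would copy {} -> {}".format(src, dst))
```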
You will also need to update your graphics card drivers, as older drivers do not support CUDA 9.
Python and scripts installation
You can either install python and the scripts the easy way or the hard way.
Portable python installation
For the easy way, simply download the portable WinPython package I created from this link. Extract the zip file and you should see a directory called “pythondf”. Within the folder pythondf\python-3.6.3.amd64, you will see three directories labeled “df”, “faceswap”, and “faceswap_lowmem”. These are the root directories of the GitHub repositories. You’re all set with installation; skip ahead.
Manual Anaconda installation
Install the latest version of Anaconda that supports python 3.6 from here.
Now run the Anaconda command prompt. This is different from the general Windows command prompt; you can find it by typing “prompt” in the Windows search bar.
Create a virtual environment from within conda. You can name it “myenv” or something else.
conda create -n myenv python=3.6 numpy pyyaml mkl
conda install -c peterjc123 pytorch cuda90
This installs PyTorch, which you need for the dfaker repo. If you only want to install faceswap, you can skip that last command (but you should still create a virtual environment).
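To confirm that the PyTorch install picked up CUDA, you can run a quick sanity check from the same environment. This is just a diagnostic; it reports rather than fails if PyTorch is missing:

```python
# Sanity check: is PyTorch installed, and does it see the GPU?
try:
    import torch
    msg = "PyTorch {}, CUDA available: {}".format(
        torch.__version__, torch.cuda.is_available())
except ImportError:
    msg = "PyTorch is not installed in this environment"
print(msg)
```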
Now, you will need to install the following to compile dlib with GPU support:
- Microsoft Visual Studio 2015 with SDK and C++ packages (Do NOT use VS 2017)
- Boost 1.66 (Click the link to prebuilt Windows binaries or go directly here and grab the latest version, currently boost_1_66_0-msvc-14.0-64.exe.)
- GitHub for Windows (desktop app also installs command line version)
Make sure that your environment is active (activate myenv), if it isn’t already.
You may wish to create a new directory. Once you are in your desired project directory, clone the dlib library.
git clone https://github.com/davisking/dlib
This downloads a new directory named “dlib”. Enter the directory and compile as follows.
cd dlib
python setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA
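After compilation finishes, it is worth confirming that dlib was actually built with CUDA support. A minimal check, assuming the build above succeeded:

```python
# Check that dlib imports and was compiled with CUDA enabled.
try:
    import dlib
    msg = "dlib {}, CUDA enabled: {}".format(
        dlib.__version__, dlib.DLIB_USE_CUDA)
except ImportError:
    msg = "dlib is not installed in this environment"
print(msg)
```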
You can now install the rest of the required packages.
pip install pathlib==1.0.1
pip install scandir==1.6
pip install h5py==2.7.1
pip install Keras==2.1.2
pip install opencv-python
pip install tensorflow-gpu==1.5.0
pip install scikit-image
pip install tqdm
It is not required in the latest commit of faceswap, but if you like, you can install:
pip install face_recognition
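Before moving on, you can sanity-check that the key packages import cleanly. Note that the importable module names differ from the pip package names in a few cases (opencv-python imports as cv2, scikit-image as skimage):

```python
import importlib

# Importable module names for the packages installed above.
modules = ["dlib", "tensorflow", "keras", "cv2", "skimage", "tqdm"]
missing = []
for name in modules:
    try:
        importlib.import_module(name)
    except ImportError:
        missing.append(name)

print("missing modules:", ", ".join(missing) if missing else "none")
```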
Return to your project’s root directory.
git clone https://github.com/deepfakes/faceswap
This creates a directory named “faceswap”.
If you would like to install the dfaker repo you need to do the following.
git clone https://github.com/dfaker/df
cd df
git clone https://github.com/keras-team/keras-contrib.git
cd keras-contrib
python setup.py install
cd ..
git clone https://github.com/1adrianb/face-alignment.git
cd face-alignment
python setup.py install
If you would like to activate GPU-based face extraction, open the file align_images_masked.py and comment out the line below by inserting a pound symbol at the start, so it reads:
#dlib.cnn_face_detection_model_v1 = monkey_patch_face_detector
You are now done installing the python scripts manually.
Here are a few additional tips for beginners:
- Remember that you must always start the Anaconda command prompt, and then activate your virtual environment.
- The paths in the usage guide follow Linux conventions. For simplicity, just type out the full Windows path each time you need a path and you will be set (e.g. C:\myproj\photos\A).
- Execute the commands from within the “faceswap” directory that contains the file “faceswap.py”.
Unlike faceswap, dfaker is mostly developed by one person, so it isn’t as polished for general use. You will have to change the code manually to adjust options, for example.
To extract faces, run the following command from the df directory, using the Anaconda prompt with your virtual environment activated:
python align_images_masked.py image_directory --file-type png
where image_directory is the path containing the images to extract. The default file type is jpg, so you can leave out the --file-type flag if you are extracting from jpg files.
This will create a new folder named “aligned” and a file named “alignments.json” within your original image path. If this is for face A, you need to manually copy the aligned images AND the alignments.json file into a folder located at df\data\A. For example, if you installed the df repo in D:\, you need to copy the aligned images and .json file to D:\df\data\A.
Repeat on a second directory containing face B, and again manually copy your files to the location df\data\B.
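The manual copy step can be scripted. The source and destination paths below are assumptions based on the examples above (and assume the face images plus alignments.json go directly inside df\data\A); change them to your actual extraction folder and df install location, then repeat for face B:

```python
import os
import shutil

# Assumed paths -- adjust to your own layout, then repeat for face B.
SRC = r"C:\myproj\photos\A"  # folder you extracted faces from
DST = r"D:\df\data\A"        # dfaker data folder for face A

aligned_dir = os.path.join(SRC, "aligned")
alignments = os.path.join(SRC, "alignments.json")

if os.path.isdir(aligned_dir):
    os.makedirs(DST, exist_ok=True)
    # Copy every aligned face image, then the alignments file.
    for name in os.listdir(aligned_dir):
        shutil.copy2(os.path.join(aligned_dir, name), DST)
    shutil.copy2(alignments, DST)
else:
    # Source not found at the assumed path; show what would be copied.
    print("would copy {} and {} -> {}".format(aligned_dir, alignments, DST))
```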
To train your model, simply enter the command below from the df directory:
python train.py
If you need to change the batch size (for example, to reduce GPU memory usage), you will have to manually edit your train.py file where it says:
batch_size = int(32)
When training is done, run the conversion script on the face A directory that you want to convert into face B. There are no mask or blurring options, as the settings are hardcoded into a more complicated algorithm.
Create the final video as usual with ffmpeg.
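A typical ffmpeg invocation for stitching numbered frames back into a video looks like the line below; the frame pattern, frame rate, and output name are placeholders, so adjust them to your footage:

```shell
ffmpeg -framerate 30 -i frame_%05d.png -c:v libx264 -pix_fmt yuv420p output.mp4
```

Run it from the folder containing your converted frames; -pix_fmt yuv420p keeps the result playable in most players.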
If you need to adjust the conversion settings, I would suggest using After Effects or another video editor to manually merge the original and converted footage together. Because you obtain a much larger face area, you have more room to mask the face in other software. Plus, if you spend a long time training your model, it’s worth spending the extra hour or so to merge the video properly.
Hopefully, this is enough to get more people started on the faceswap and dfaker scripts. I’ll update this guide as I have more time in the future.