* Modernize CI

* Modernize CI

* Modernize CI

* Implement dynamic config (#518)

* Implement dynamic config

* Fix apply config

* Move config to general

* Move config to general

* Move config to general

* Add Windows installer

* Add --open-browser

* Add Windows installer part2

* Use non-commercial license for the installer

* Fix create environment in installer

* Fix openvino for installer

* Fix conda for installer

* Fix conda for installer, Remove python and pip as it is part of conda

* Improve installer - guess the path

* Fix CI

* Add missing accept-source-agreements to installer

* Install WinGet

* Improve WinGet installation steps

* Use absolute path for winget

* More installer polishing

* Add final page to installer, disable version check for Gradio

* Remove finish page again

* Use NEXT for metadata

* Support for /S mode

* Use winget-less approach

* Improve Conda uninstall

* Improve code using platform helpers (#529)

* Update dependencies

* Feat/fix windows unicode paths (#531)

* Fix the Windows unicode path dilemma

* Update dependencies

* Fix the Windows unicode path dilemma part2

* Remove conda environment on uninstall

* Fix uninstall command

* Install apps for local user only

* Add ultra sharp

* Add clear reality

* Update README and FUNDING

* Update FUNDING.yml

* Prevent preview of large videos in Gradio (#540)

* Fix order

* Refactor temporary file management, Use temporary file for image processing (#542)

* Allow webm on target component

* Reduce mosaic effect for frame processors

* clear static faces on trim frame changes

* Fix trim frame component

* Downgrade openvino dependency

* Prepare next release

* Move get_short_path to filesystem, Add/Improve some testing

* Prepare installer, Prevent infinite loop for sanitize_path_for_windows

* Introduce execution device id

* Introduce execution device id

* Seems like device id can be a string

* Seems like device id can be a string

* Make Intel Arc work with OpenVINOExecution

* Use latest Git

* Update wording

* Fix create_float_range

* Update preview

* Fix Git link
Henry Ruhs 2024-05-19 15:22:03 +02:00, committed by GitHub
parent 6ff35965a7
commit 319e3f9652
GPG Key ID: B5690EEEBB952194 (no known key found for this signature in database)
46 changed files with 555 additions and 213 deletions

.github/preview.png (vendored): binary file changed, not shown (1.2 MiB before and after)


@@ -7,9 +7,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Set up Python 3.10
uses: actions/setup-python@v2
uses: actions/setup-python@v5
with:
python-version: '3.10'
- run: pip install flake8
@@ -23,11 +23,11 @@ jobs:
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up ffmpeg
uses: FedericoCarboni/setup-ffmpeg@v2
uses: actions/checkout@v4
- name: Set up FFMpeg
uses: FedericoCarboni/setup-ffmpeg@v3
- name: Set up Python 3.10
uses: actions/setup-python@v2
uses: actions/setup-python@v5
with:
python-version: '3.10'
- run: python install.py --onnxruntime default --skip-conda

.install/LICENSE.md (new file, 3 lines)

@@ -0,0 +1,3 @@
CC-BY-4.0 license
Copyright (c) 2024 Henry Ruhs

.install/facefusion.ico (new binary file, 17 KiB, not shown)

.install/facefusion.nsi (new file, 177 lines)

@@ -0,0 +1,177 @@
!include MUI2.nsh
!include nsDialogs.nsh
!include LogicLib.nsh
RequestExecutionLevel admin
Name 'FaceFusion 2.6.0'
OutFile 'FaceFusion_2.6.0.exe'
!define MUI_ICON 'facefusion.ico'
!insertmacro MUI_PAGE_DIRECTORY
Page custom InstallPage PostInstallPage
!insertmacro MUI_PAGE_INSTFILES
!insertmacro MUI_LANGUAGE English
Var UseDefault
Var UseCuda
Var UseDirectMl
Var UseOpenVino
Function .onInit
StrCpy $INSTDIR 'C:\FaceFusion'
FunctionEnd
Function InstallPage
nsDialogs::Create 1018
!insertmacro MUI_HEADER_TEXT 'Choose Your Accelerator' 'Choose your accelerator based on the graphics card.'
${NSD_CreateRadioButton} 0 40u 100% 10u 'Default'
Pop $UseDefault
${NSD_CreateRadioButton} 0 55u 100% 10u 'CUDA (NVIDIA)'
Pop $UseCuda
${NSD_CreateRadioButton} 0 70u 100% 10u 'DirectML (AMD, Intel, NVIDIA)'
Pop $UseDirectMl
${NSD_CreateRadioButton} 0 85u 100% 10u 'OpenVINO (Intel)'
Pop $UseOpenVino
${NSD_Check} $UseDefault
nsDialogs::Show
FunctionEnd
Function PostInstallPage
${NSD_GetState} $UseDefault $UseDefault
${NSD_GetState} $UseCuda $UseCuda
${NSD_GetState} $UseDirectMl $UseDirectMl
${NSD_GetState} $UseOpenVino $UseOpenVino
FunctionEnd
Function Destroy
${If} ${Silent}
Quit
${Else}
Abort
${EndIf}
FunctionEnd
Section 'Prepare Your Platform'
DetailPrint 'Install GIT'
inetc::get 'https://github.com/git-for-windows/git/releases/download/v2.45.1.windows.1/Git-2.45.1-64-bit.exe' '$TEMP\Git.exe'
ExecWait '$TEMP\Git.exe /CURRENTUSER /VERYSILENT /DIR=$LOCALAPPDATA\Programs\Git' $0
Delete '$TEMP\Git.exe'
${If} $0 > 0
DetailPrint 'Git installation aborted with error code $0'
Call Destroy
${EndIf}
DetailPrint 'Uninstall Conda'
ExecWait '$LOCALAPPDATA\Programs\Miniconda3\Uninstall-Miniconda3.exe /S _?=$LOCALAPPDATA\Programs\Miniconda3'
RMDir /r '$LOCALAPPDATA\Programs\Miniconda3'
DetailPrint 'Install Conda'
inetc::get 'https://repo.anaconda.com/miniconda/Miniconda3-py310_24.3.0-0-Windows-x86_64.exe' '$TEMP\Miniconda3.exe'
ExecWait '$TEMP\Miniconda3.exe /InstallationType=JustMe /AddToPath=1 /S /D=$LOCALAPPDATA\Programs\Miniconda3' $1
Delete '$TEMP\Miniconda3.exe'
${If} $1 > 0
DetailPrint 'Conda installation aborted with error code $1'
Call Destroy
${EndIf}
SectionEnd
Section 'Download Your Copy'
SetOutPath $INSTDIR
DetailPrint 'Download Your Copy'
RMDir /r $INSTDIR
nsExec::Exec '$LOCALAPPDATA\Programs\Git\cmd\git.exe clone https://github.com/facefusion/facefusion --branch 2.6.0 .'
SectionEnd
Section 'Setup Your Environment'
DetailPrint 'Setup Your Environment'
nsExec::Exec '$LOCALAPPDATA\Programs\Miniconda3\Scripts\conda.exe init --all'
nsExec::Exec '$LOCALAPPDATA\Programs\Miniconda3\Scripts\conda.exe create --name facefusion python=3.10 --yes'
SectionEnd
Section 'Create Install Batch'
SetOutPath $INSTDIR
FileOpen $0 install-ffmpeg.bat w
FileOpen $1 install-accelerator.bat w
FileOpen $2 install-application.bat w
FileWrite $0 '@echo off && conda activate facefusion && conda install conda-forge::ffmpeg=7.0.0 --yes'
${If} $UseCuda == 1
FileWrite $1 '@echo off && conda activate facefusion && conda install cudatoolkit=11.8 cudnn=8.9.2.26 conda-forge::gputil=1.4.0 conda-forge::zlib-wapi --yes'
FileWrite $2 '@echo off && conda activate facefusion && python install.py --onnxruntime cuda-11.8'
${ElseIf} $UseDirectMl == 1
FileWrite $2 '@echo off && conda activate facefusion && python install.py --onnxruntime directml'
${ElseIf} $UseOpenVino == 1
FileWrite $1 '@echo off && conda activate facefusion && conda install conda-forge::openvino=2023.1.0 --yes'
FileWrite $2 '@echo off && conda activate facefusion && python install.py --onnxruntime openvino'
${Else}
FileWrite $2 '@echo off && conda activate facefusion && python install.py --onnxruntime default'
${EndIf}
FileClose $0
FileClose $1
FileClose $2
SectionEnd
Section 'Install Your FFmpeg'
SetOutPath $INSTDIR
DetailPrint 'Install Your FFmpeg'
nsExec::ExecToLog 'install-ffmpeg.bat'
SectionEnd
Section 'Install Your Accelerator'
SetOutPath $INSTDIR
DetailPrint 'Install Your Accelerator'
nsExec::ExecToLog 'install-accelerator.bat'
SectionEnd
Section 'Install The Application'
SetOutPath $INSTDIR
DetailPrint 'Install The Application'
nsExec::ExecToLog 'install-application.bat'
SectionEnd
Section 'Create Run Batch'
SetOutPath $INSTDIR
FileOpen $0 run.bat w
FileWrite $0 '@echo off && conda activate facefusion && python run.py --open-browser'
FileClose $0
SectionEnd
Section 'Register The Application'
DetailPrint 'Register The Application'
CreateDirectory $SMPROGRAMS\FaceFusion
CreateShortcut $SMPROGRAMS\FaceFusion\FaceFusion.lnk $INSTDIR\run.bat '' $INSTDIR\.install\facefusion.ico
CreateShortcut $DESKTOP\FaceFusion.lnk $INSTDIR\run.bat '' $INSTDIR\.install\facefusion.ico
WriteUninstaller $INSTDIR\Uninstall.exe
WriteRegStr HKLM SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\FaceFusion DisplayName 'FaceFusion'
WriteRegStr HKLM SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\FaceFusion DisplayVersion '2.6.0'
WriteRegStr HKLM SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\FaceFusion Publisher 'Henry Ruhs'
WriteRegStr HKLM SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\FaceFusion InstallLocation $INSTDIR
WriteRegStr HKLM SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\FaceFusion UninstallString $INSTDIR\uninstall.exe
SectionEnd
Section 'Uninstall'
nsExec::Exec '$LOCALAPPDATA\Programs\Miniconda3\Scripts\conda.exe env remove --name facefusion --yes'
Delete $DESKTOP\FaceFusion.lnk
RMDir /r $SMPROGRAMS\FaceFusion
RMDir /r $INSTDIR
DeleteRegKey HKLM SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\FaceFusion
SectionEnd


@@ -1,3 +1,3 @@
MIT license
Copyright (c) 2023 Henry Ruhs
Copyright (c) 2024 Henry Ruhs


@@ -29,6 +29,7 @@ python run.py [options]
options:
-h, --help show this help message and exit
-c CONFIG_PATH, --config CONFIG_PATH choose the config file to override defaults
-s SOURCE_PATHS, --source SOURCE_PATHS choose single or multiple source images or audios
-t TARGET_PATH, --target TARGET_PATH choose single target image or video
-o OUTPUT_PATH, --output OUTPUT_PATH specify the output file or directory
@@ -41,6 +42,7 @@ misc:
--log-level {error,warn,info,debug} adjust the message severity displayed in the terminal
execution:
--execution-device-id EXECUTION_DEVICE_ID specify the device used for processing
--execution-providers EXECUTION_PROVIDERS [EXECUTION_PROVIDERS ...] accelerate the model inference using different providers (choices: cpu, ...)
--execution-thread-count [1-128] specify the amount of parallel threads while processing
--execution-queue-count [1-32] specify the amount of frames each thread is processing
@@ -55,18 +57,18 @@ face analyser:
--face-analyser-gender {female,male} filter the detected faces based on their gender
--face-detector-model {many,retinaface,scrfd,yoloface,yunet} choose the model responsible for detecting the face
--face-detector-size FACE_DETECTOR_SIZE specify the size of the frame provided to the face detector
--face-detector-score [0.0-1.0] filter the detected faces based on the confidence score
--face-landmarker-score [0.0-1.0] filter the detected landmarks based on the confidence score
--face-detector-score [0.0-0.95] filter the detected faces based on the confidence score
--face-landmarker-score [0.0-0.95] filter the detected landmarks based on the confidence score
face selector:
--face-selector-mode {many,one,reference} use reference based tracking or simple matching
--reference-face-position REFERENCE_FACE_POSITION specify the position used to create the reference face
--reference-face-distance [0.0-1.5] specify the desired similarity between the reference face and target face
--reference-face-distance [0.0-1.45] specify the desired similarity between the reference face and target face
--reference-frame-number REFERENCE_FRAME_NUMBER specify the frame used to create the reference face
face mask:
--face-mask-types FACE_MASK_TYPES [FACE_MASK_TYPES ...] mix and match different face mask types (choices: box, occlusion, region)
--face-mask-blur [0.0-1.0] specify the degree of blur applied to the box mask
--face-mask-blur [0.0-0.95] specify the degree of blur applied to the box mask
--face-mask-padding FACE_MASK_PADDING [FACE_MASK_PADDING ...] apply top, right, bottom and left padding to the box mask
--face-mask-regions FACE_MASK_REGIONS [FACE_MASK_REGIONS ...] choose the facial features used for the region mask (choices: skin, left-eyebrow, right-eyebrow, left-eye, right-eye, glasses, nose, mouth, upper-lip, lower-lip)
@@ -95,11 +97,12 @@ frame processors:
--frame-colorizer-model {ddcolor,ddcolor_artistic,deoldify,deoldify_artistic,deoldify_stable} choose the model responsible for colorizing the frame
--frame-colorizer-blend [0-100] blend the colorized into the previous frame
--frame-colorizer-size {192x192,256x256,384x384,512x512} specify the size of the frame provided to the frame colorizer
--frame-enhancer-model {lsdir_x4,nomos8k_sc_x4,real_esrgan_x2,real_esrgan_x2_fp16,real_esrgan_x4,real_esrgan_x4_fp16,real_hatgan_x4,span_kendata_x4} choose the model responsible for enhancing the frame
--frame-enhancer-model {clear_reality_x4,lsdir_x4,nomos8k_sc_x4,real_esrgan_x2,real_esrgan_x2_fp16,real_esrgan_x4,real_esrgan_x4_fp16,real_hatgan_x4,span_kendata_x4,ultra_sharp_x4} choose the model responsible for enhancing the frame
--frame-enhancer-blend [0-100] blend the enhanced into the previous frame
--lip-syncer-model {wav2lip_gan} choose the model responsible for syncing the lips
uis:
--open-browser open the browser once the program is ready
--ui-layouts UI_LAYOUTS [UI_LAYOUTS ...] launch a single or multiple UI layouts (choices: benchmark, default, webcam, ...)
```


@@ -10,6 +10,7 @@ headless =
log_level =
[execution]
execution_device_id =
execution_providers =
execution_thread_count =
execution_queue_count =
@@ -69,4 +70,5 @@ frame_enhancer_blend =
lip_syncer_model =
[uis]
open_browser =
ui_layouts =


@@ -1,17 +1,45 @@
from typing import List, Any
import numpy
import platform
def create_metavar(ranges : List[Any]) -> str:
return '[' + str(ranges[0]) + '-' + str(ranges[-1]) + ']'
def create_int_range(start : int, stop : int, step : int) -> List[int]:
return (numpy.arange(start, stop + step, step)).tolist()
def create_int_range(start : int, end : int, step : int) -> List[int]:
int_range = []
current = start
while current <= end:
int_range.append(current)
current += step
return int_range
def create_float_range(start : float, stop : float, step : float) -> List[float]:
return (numpy.around(numpy.arange(start, stop + step, step), decimals = 2)).tolist()
def create_float_range(start : float, end : float, step : float) -> List[float]:
float_range = []
current = start
while current <= end:
float_range.append(round(current, 2))
current = round(current + step, 2)
return float_range
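The new range helpers above replace the previous `numpy.arange` one-liners with explicit loops; rounding after every addition keeps accumulated float error from producing a value past the endpoint, which is what the `Fix create_float_range` commit addresses. Together with `create_metavar` they can be exercised like this:

```python
from typing import Any, List

def create_metavar(ranges : List[Any]) -> str:
    # render a choices list as '[first-last]' for argparse metavars
    return '[' + str(ranges[0]) + '-' + str(ranges[-1]) + ']'

def create_int_range(start : int, end : int, step : int) -> List[int]:
    int_range = []
    current = start
    while current <= end:
        int_range.append(current)
        current += step
    return int_range

def create_float_range(start : float, end : float, step : float) -> List[float]:
    # round each step so float error cannot overshoot the endpoint
    float_range = []
    current = start
    while current <= end:
        float_range.append(round(current, 2))
        current = round(current + step, 2)
    return float_range

print(create_metavar(create_int_range(1, 128, 1)))  # [1-128]
print(create_float_range(0.0, 1.0, 0.05)[-1])       # 1.0
```

With the old `numpy.arange(start, stop + step, step)` form, a fractional step could yield a trailing value beyond `stop` once rounding error accumulated, which is a documented caveat of `arange` with float steps.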
def is_linux() -> bool:
return to_lower_case(platform.system()) == 'linux'
def is_macos() -> bool:
return to_lower_case(platform.system()) == 'darwin'
def is_windows() -> bool:
return to_lower_case(platform.system()) == 'windows'
def to_lower_case(__string__ : Any) -> str:
return str(__string__).lower()
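The helpers above centralize the OS checks that the `Improve code using platform helpers` commit rolls out across the codebase; for instance, `download.py` later swaps its `platform.system().lower() == 'darwin'` test for `is_macos()`. Restated as a self-contained sketch:

```python
import platform
from typing import Any

def to_lower_case(__string__ : Any) -> str:
    return str(__string__).lower()

def is_linux() -> bool:
    return to_lower_case(platform.system()) == 'linux'

def is_macos() -> bool:
    return to_lower_case(platform.system()) == 'darwin'

def is_windows() -> bool:
    return to_lower_case(platform.system()) == 'windows'

# at most one of the three can be true on any given host
print([ is_linux(), is_macos(), is_windows() ].count(True) <= 1)  # True
```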
def get_first(__list__ : Any) -> Any:


@@ -1,7 +1,7 @@
from configparser import ConfigParser
from typing import Any, Optional, List
from facefusion.filesystem import resolve_relative_path
import facefusion.globals
CONFIG = None
@@ -10,9 +10,8 @@ def get_config() -> ConfigParser:
global CONFIG
if CONFIG is None:
config_path = resolve_relative_path('../facefusion.ini')
CONFIG = ConfigParser()
CONFIG.read(config_path, encoding = 'utf-8')
CONFIG.read(facefusion.globals.config_path, encoding = 'utf-8')
return CONFIG


@@ -37,7 +37,7 @@ def get_content_analyser() -> Any:
sleep(0.5)
if CONTENT_ANALYSER is None:
model_path = MODELS.get('open_nsfw').get('path')
CONTENT_ANALYSER = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
CONTENT_ANALYSER = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
return CONTENT_ANALYSER


@@ -24,7 +24,7 @@ from facefusion.normalizer import normalize_output_path, normalize_padding, norm
from facefusion.memory import limit_system_memory
from facefusion.statistics import conditional_log_statistics
from facefusion.download import conditional_download
from facefusion.filesystem import list_directory, get_temp_frame_paths, create_temp, move_temp, clear_temp, is_image, is_video, filter_audio_paths, resolve_relative_path
from facefusion.filesystem import get_temp_frame_paths, get_temp_file_path, create_temp, move_temp, clear_temp, is_image, is_video, filter_audio_paths, resolve_relative_path, list_directory
from facefusion.ffmpeg import extract_frames, merge_video, copy_image, finalize_image, restore_audio, replace_audio
from facefusion.vision import read_image, read_static_images, detect_image_resolution, restrict_video_fps, create_image_resolutions, get_video_frame, detect_video_resolution, detect_video_fps, restrict_video_resolution, restrict_image_resolution, create_video_resolutions, pack_resolution, unpack_resolution
@@ -34,8 +34,10 @@ warnings.filterwarnings('ignore', category = UserWarning, module = 'gradio')
def cli() -> None:
signal.signal(signal.SIGINT, lambda signal_number, frame: destroy())
program = ArgumentParser(formatter_class = lambda prog: HelpFormatter(prog, max_help_position = 160), add_help = False)
program = ArgumentParser(formatter_class = lambda prog: HelpFormatter(prog, max_help_position = 200), add_help = False)
# general
program.add_argument('-c', '--config', help = wording.get('help.config'), dest = 'config_path', default = 'facefusion.ini')
apply_config(program)
program.add_argument('-s', '--source', help = wording.get('help.source'), action = 'append', dest = 'source_paths', default = config.get_str_list('general.source_paths'))
program.add_argument('-t', '--target', help = wording.get('help.target'), dest = 'target_path', default = config.get_str_value('general.target_path'))
program.add_argument('-o', '--output', help = wording.get('help.output'), dest = 'output_path', default = config.get_str_value('general.output_path'))
@@ -49,6 +51,7 @@ def cli() -> None:
# execution
execution_providers = encode_execution_providers(onnxruntime.get_available_providers())
group_execution = program.add_argument_group('execution')
group_execution.add_argument('--execution-device-id', help = wording.get('help.execution_device_id'), default = config.get_str_value('execution.execution_device_id', '0'))
group_execution.add_argument('--execution-providers', help = wording.get('help.execution_providers').format(choices = ', '.join(execution_providers)), default = config.get_str_list('execution.execution_providers', 'cpu'), choices = execution_providers, nargs = '+', metavar = 'EXECUTION_PROVIDERS')
group_execution.add_argument('--execution-thread-count', help = wording.get('help.execution_thread_count'), type = int, default = config.get_int_value('execution.execution_thread_count', '4'), choices = facefusion.choices.execution_thread_count_range, metavar = create_metavar(facefusion.choices.execution_thread_count_range))
group_execution.add_argument('--execution-queue-count', help = wording.get('help.execution_queue_count'), type = int, default = config.get_int_value('execution.execution_queue_count', '1'), choices = facefusion.choices.execution_queue_count_range, metavar = create_metavar(facefusion.choices.execution_queue_count_range))
@@ -104,10 +107,16 @@ def cli() -> None:
# uis
available_ui_layouts = list_directory('facefusion/uis/layouts')
group_uis = program.add_argument_group('uis')
group_uis.add_argument('--open-browser', help = wording.get('help.open_browser'), action = 'store_true', default = config.get_bool_value('uis.open_browser'))
group_uis.add_argument('--ui-layouts', help = wording.get('help.ui_layouts').format(choices = ', '.join(available_ui_layouts)), default = config.get_str_list('uis.ui_layouts', 'default'), nargs = '+')
run(program)
def apply_config(program : ArgumentParser) -> None:
known_args = program.parse_known_args()
facefusion.globals.config_path = get_first(known_args).config_path
def validate_args(program : ArgumentParser) -> None:
try:
for action in program._actions:
@@ -133,6 +142,7 @@ def apply_args(program : ArgumentParser) -> None:
facefusion.globals.headless = args.headless
facefusion.globals.log_level = args.log_level
# execution
facefusion.globals.execution_device_id = args.execution_device_id
facefusion.globals.execution_providers = decode_execution_providers(args.execution_providers)
facefusion.globals.execution_thread_count = args.execution_thread_count
facefusion.globals.execution_queue_count = args.execution_queue_count
@@ -194,6 +204,7 @@ def apply_args(program : ArgumentParser) -> None:
frame_processor_module = load_frame_processor_module(frame_processor)
frame_processor_module.apply_args(program)
# uis
facefusion.globals.open_browser = args.open_browser
facefusion.globals.ui_layouts = args.ui_layouts
@@ -299,28 +310,38 @@ def process_image(start_time : float) -> None:
normed_output_path = normalize_output_path(facefusion.globals.target_path, facefusion.globals.output_path)
if analyse_image(facefusion.globals.target_path):
return
# clear temp
logger.debug(wording.get('clearing_temp'), __name__.upper())
clear_temp(facefusion.globals.target_path)
# create temp
logger.debug(wording.get('creating_temp'), __name__.upper())
create_temp(facefusion.globals.target_path)
# copy image
process_manager.start()
temp_image_resolution = pack_resolution(restrict_image_resolution(facefusion.globals.target_path, unpack_resolution(facefusion.globals.output_image_resolution)))
logger.info(wording.get('copying_image').format(resolution = temp_image_resolution), __name__.upper())
if copy_image(facefusion.globals.target_path, normed_output_path, temp_image_resolution):
if copy_image(facefusion.globals.target_path, temp_image_resolution):
logger.debug(wording.get('copying_image_succeed'), __name__.upper())
else:
logger.error(wording.get('copying_image_failed'), __name__.upper())
return
# process image
temp_file_path = get_temp_file_path(facefusion.globals.target_path)
for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
logger.info(wording.get('processing'), frame_processor_module.NAME)
frame_processor_module.process_image(facefusion.globals.source_paths, normed_output_path, normed_output_path)
frame_processor_module.process_image(facefusion.globals.source_paths, temp_file_path, temp_file_path)
frame_processor_module.post_process()
if is_process_stopping():
return
# finalize image
logger.info(wording.get('finalizing_image').format(resolution = facefusion.globals.output_image_resolution), __name__.upper())
if finalize_image(normed_output_path, facefusion.globals.output_image_resolution):
if finalize_image(facefusion.globals.target_path, normed_output_path, facefusion.globals.output_image_resolution):
logger.debug(wording.get('finalizing_image_succeed'), __name__.upper())
else:
logger.warn(wording.get('finalizing_image_skipped'), __name__.upper())
# clear temp
logger.debug(wording.get('clearing_temp'), __name__.upper())
clear_temp(facefusion.globals.target_path)
# validate image
if is_image(normed_output_path):
seconds = '{:.2f}'.format((time() - start_time) % 60)


@@ -1,6 +1,5 @@
import os
import subprocess
import platform
import ssl
import urllib.request
from typing import List
@@ -9,16 +8,17 @@ from tqdm import tqdm
import facefusion.globals
from facefusion import wording
from facefusion.filesystem import is_file
from facefusion.common_helper import is_macos
from facefusion.filesystem import get_file_size, is_file
if platform.system().lower() == 'darwin':
if is_macos():
ssl._create_default_https_context = ssl._create_unverified_context
def conditional_download(download_directory_path : str, urls : List[str]) -> None:
for url in urls:
download_file_path = os.path.join(download_directory_path, os.path.basename(url))
initial_size = os.path.getsize(download_file_path) if is_file(download_file_path) else 0
initial_size = get_file_size(download_file_path)
download_size = get_download_size(url)
if initial_size < download_size:
with tqdm(total = download_size, initial = initial_size, desc = wording.get('downloading'), unit = 'B', unit_scale = True, unit_divisor = 1024, ascii = ' =', disable = facefusion.globals.log_level in [ 'warn', 'error' ]) as progress:
@@ -26,7 +26,7 @@ def conditional_download(download_directory_path : str, urls : List[str]) -> Non
current_size = initial_size
while current_size < download_size:
if is_file(download_file_path):
current_size = os.path.getsize(download_file_path)
current_size = get_file_size(download_file_path)
progress.update(current_size - progress.n)
if download_size and not is_download_done(url, download_file_path):
os.remove(download_file_path)
@@ -44,5 +44,5 @@ def get_download_size(url : str) -> int:
def is_download_done(url : str, file_path : str) -> bool:
if is_file(file_path):
return get_download_size(url) == os.path.getsize(file_path)
return get_download_size(url) == get_file_size(file_path)
return False


@@ -18,15 +18,27 @@ def decode_execution_providers(execution_providers : List[str]) -> List[str]:
return [ execution_provider for execution_provider, encoded_execution_provider in zip(available_execution_providers, encoded_execution_providers) if any(execution_provider in encoded_execution_provider for execution_provider in execution_providers) ]
def apply_execution_provider_options(execution_providers : List[str]) -> List[Any]:
def apply_execution_provider_options(execution_device_id : str, execution_providers : List[str]) -> List[Any]:
execution_providers_with_options : List[Any] = []
for execution_provider in execution_providers:
if execution_provider == 'CUDAExecutionProvider':
execution_providers_with_options.append((execution_provider,
{
'device_id': execution_device_id,
'cudnn_conv_algo_search': 'EXHAUSTIVE' if use_exhaustive() else 'DEFAULT'
}))
elif execution_provider == 'OpenVINOExecutionProvider':
execution_providers_with_options.append((execution_provider,
{
'device_id': execution_device_id,
'device_type': execution_device_id + '_FP32'
}))
elif execution_provider in [ 'DmlExecutionProvider', 'ROCMExecutionProvider' ]:
execution_providers_with_options.append((execution_provider,
{
'device_id': execution_device_id
}))
else:
execution_providers_with_options.append(execution_provider)
return execution_providers_with_options
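The reworked helper above threads the new `--execution-device-id` into the per-provider option dicts that `onnxruntime.InferenceSession(..., providers = ...)` accepts: a list mixing bare provider names and `(name, options)` tuples. The device id stays a string because OpenVINO addresses devices by name, e.g. `GPU` for Intel Arc, which the `device id can be a string` commits accommodate. A standalone sketch, with `use_exhaustive` stubbed since its definition is outside this diff:

```python
from typing import Any, List

def use_exhaustive() -> bool:
    # stubbed: the real helper (not shown in this diff) inspects the hardware
    return False

def apply_execution_provider_options(execution_device_id : str, execution_providers : List[str]) -> List[Any]:
    execution_providers_with_options : List[Any] = []
    for execution_provider in execution_providers:
        if execution_provider == 'CUDAExecutionProvider':
            execution_providers_with_options.append((execution_provider,
            {
                'device_id': execution_device_id,
                'cudnn_conv_algo_search': 'EXHAUSTIVE' if use_exhaustive() else 'DEFAULT'
            }))
        elif execution_provider == 'OpenVINOExecutionProvider':
            execution_providers_with_options.append((execution_provider,
            {
                'device_id': execution_device_id,
                'device_type': execution_device_id + '_FP32'
            }))
        elif execution_provider in [ 'DmlExecutionProvider', 'ROCMExecutionProvider' ]:
            execution_providers_with_options.append((execution_provider,
            {
                'device_id': execution_device_id
            }))
        else:
            execution_providers_with_options.append(execution_provider)
    return execution_providers_with_options

providers = apply_execution_provider_options('GPU', [ 'OpenVINOExecutionProvider', 'CPUExecutionProvider' ])
print(providers[0])  # ('OpenVINOExecutionProvider', {'device_id': 'GPU', 'device_type': 'GPU_FP32'})
```

Providers without a dedicated branch (such as `CPUExecutionProvider`) pass through untouched, which is why the sessions throughout `face_analyser.py` and the frame processors can forward the same list unconditionally.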


@@ -88,24 +88,24 @@ def get_face_analyser() -> Any:
sleep(0.5)
if FACE_ANALYSER is None:
if facefusion.globals.face_detector_model in [ 'many', 'retinaface' ]:
face_detectors['retinaface'] = onnxruntime.InferenceSession(MODELS.get('face_detector_retinaface').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
face_detectors['retinaface'] = onnxruntime.InferenceSession(MODELS.get('face_detector_retinaface').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
if facefusion.globals.face_detector_model in [ 'many', 'scrfd' ]:
face_detectors['scrfd'] = onnxruntime.InferenceSession(MODELS.get('face_detector_scrfd').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
face_detectors['scrfd'] = onnxruntime.InferenceSession(MODELS.get('face_detector_scrfd').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
if facefusion.globals.face_detector_model in [ 'many', 'yoloface' ]:
face_detectors['yoloface'] = onnxruntime.InferenceSession(MODELS.get('face_detector_yoloface').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
face_detectors['yoloface'] = onnxruntime.InferenceSession(MODELS.get('face_detector_yoloface').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
if facefusion.globals.face_detector_model in [ 'yunet' ]:
face_detectors['yunet'] = cv2.FaceDetectorYN.create(MODELS.get('face_detector_yunet').get('path'), '', (0, 0))
if facefusion.globals.face_recognizer_model == 'arcface_blendswap':
face_recognizer = onnxruntime.InferenceSession(MODELS.get('face_recognizer_arcface_blendswap').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
face_recognizer = onnxruntime.InferenceSession(MODELS.get('face_recognizer_arcface_blendswap').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
if facefusion.globals.face_recognizer_model == 'arcface_inswapper':
face_recognizer = onnxruntime.InferenceSession(MODELS.get('face_recognizer_arcface_inswapper').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
face_recognizer = onnxruntime.InferenceSession(MODELS.get('face_recognizer_arcface_inswapper').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
if facefusion.globals.face_recognizer_model == 'arcface_simswap':
face_recognizer = onnxruntime.InferenceSession(MODELS.get('face_recognizer_arcface_simswap').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
face_recognizer = onnxruntime.InferenceSession(MODELS.get('face_recognizer_arcface_simswap').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
if facefusion.globals.face_recognizer_model == 'arcface_uniface':
face_recognizer = onnxruntime.InferenceSession(MODELS.get('face_recognizer_arcface_uniface').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
face_landmarkers['68'] = onnxruntime.InferenceSession(MODELS.get('face_landmarker_68').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
face_landmarkers['68_5'] = onnxruntime.InferenceSession(MODELS.get('face_landmarker_68_5').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
gender_age = onnxruntime.InferenceSession(MODELS.get('gender_age').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_providers))
face_recognizer = onnxruntime.InferenceSession(MODELS.get('face_recognizer_arcface_uniface').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
face_landmarkers['68'] = onnxruntime.InferenceSession(MODELS.get('face_landmarker_68').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
face_landmarkers['68_5'] = onnxruntime.InferenceSession(MODELS.get('face_landmarker_68_5').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
gender_age = onnxruntime.InferenceSession(MODELS.get('gender_age').get('path'), providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
FACE_ANALYSER =\
{
'face_detectors': face_detectors,


@@ -52,7 +52,7 @@ def get_face_occluder() -> Any:
sleep(0.5)
if FACE_OCCLUDER is None:
model_path = MODELS.get('face_occluder').get('path')
FACE_OCCLUDER = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
FACE_OCCLUDER = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
return FACE_OCCLUDER
@@ -64,7 +64,7 @@ def get_face_parser() -> Any:
sleep(0.5)
if FACE_PARSER is None:
model_path = MODELS.get('face_parser').get('path')
FACE_PARSER = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
FACE_PARSER = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
return FACE_PARSER
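Every hunk in this release makes the same change to the InferenceSession calls: facefusion.globals.execution_device_id is threaded through as the first argument of apply_execution_provider_options. The helper itself is not shown in this diff; a minimal sketch of how it could bind the device id, following the ONNX Runtime convention of pairing a provider name with a provider-options dict (the CUDA-specific handling below is an assumption, not code from the release):

```python
from typing import Any, List

def apply_execution_provider_options(execution_device_id : str, execution_providers : List[str]) -> List[Any]:
    # Pair device-aware providers with an options dict so the session binds to the requested device.
    execution_providers_with_options : List[Any] = []
    for execution_provider in execution_providers:
        if execution_provider == 'CUDAExecutionProvider':
            execution_providers_with_options.append((execution_provider, { 'device_id': execution_device_id }))
        else:
            execution_providers_with_options.append(execution_provider)
    return execution_providers_with_options
```

The resulting list is what onnxruntime.InferenceSession accepts as its providers argument.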

View File

@@ -6,7 +6,7 @@ import filetype
import facefusion.globals
from facefusion import logger, process_manager
from facefusion.typing import OutputVideoPreset, Fps, AudioBuffer
from facefusion.filesystem import get_temp_frames_pattern, get_temp_output_video_path
from facefusion.filesystem import get_temp_frames_pattern, get_temp_file_path
from facefusion.vision import restrict_video_fps
@@ -60,7 +60,7 @@ def extract_frames(target_path : str, temp_video_resolution : str, temp_video_fp
def merge_video(target_path : str, output_video_resolution : str, output_video_fps : Fps) -> bool:
temp_video_fps = restrict_video_fps(target_path, output_video_fps)
temp_output_video_path = get_temp_output_video_path(target_path)
temp_file_path = get_temp_file_path(target_path)
temp_frames_pattern = get_temp_frames_pattern(target_path, '%04d')
commands = [ '-r', str(temp_video_fps), '-i', temp_frames_pattern, '-s', str(output_video_resolution), '-c:v', facefusion.globals.output_video_encoder ]
@@ -76,20 +76,22 @@ def merge_video(target_path : str, output_video_resolution : str, output_video_f
if facefusion.globals.output_video_encoder in [ 'h264_amf', 'hevc_amf' ]:
output_video_compression = round(51 - (facefusion.globals.output_video_quality * 0.51))
commands.extend([ '-qp_i', str(output_video_compression), '-qp_p', str(output_video_compression), '-quality', map_amf_preset(facefusion.globals.output_video_preset) ])
commands.extend([ '-vf', 'framerate=fps=' + str(output_video_fps), '-pix_fmt', 'yuv420p', '-colorspace', 'bt709', '-y', temp_output_video_path ])
commands.extend([ '-vf', 'framerate=fps=' + str(output_video_fps), '-pix_fmt', 'yuv420p', '-colorspace', 'bt709', '-y', temp_file_path ])
return run_ffmpeg(commands)
def copy_image(target_path : str, output_path : str, temp_image_resolution : str) -> bool:
def copy_image(target_path : str, temp_image_resolution : str) -> bool:
temp_file_path = get_temp_file_path(target_path)
is_webp = filetype.guess_mime(target_path) == 'image/webp'
temp_image_compression = 100 if is_webp else 0
commands = [ '-i', target_path, '-s', str(temp_image_resolution), '-q:v', str(temp_image_compression), '-y', output_path ]
commands = [ '-i', target_path, '-s', str(temp_image_resolution), '-q:v', str(temp_image_compression), '-y', temp_file_path ]
return run_ffmpeg(commands)
def finalize_image(output_path : str, output_image_resolution : str) -> bool:
def finalize_image(target_path : str, output_path : str, output_image_resolution : str) -> bool:
temp_file_path = get_temp_file_path(target_path)
output_image_compression = round(31 - (facefusion.globals.output_image_quality * 0.31))
commands = [ '-i', output_path, '-s', str(output_image_resolution), '-q:v', str(output_image_compression), '-y', output_path ]
commands = [ '-i', temp_file_path, '-s', str(output_image_resolution), '-q:v', str(output_image_compression), '-y', output_path ]
return run_ffmpeg(commands)
@@ -105,8 +107,8 @@ def read_audio_buffer(target_path : str, sample_rate : int, channel_total : int)
def restore_audio(target_path : str, output_path : str, output_video_fps : Fps) -> bool:
trim_frame_start = facefusion.globals.trim_frame_start
trim_frame_end = facefusion.globals.trim_frame_end
temp_output_video_path = get_temp_output_video_path(target_path)
commands = [ '-i', temp_output_video_path ]
temp_file_path = get_temp_file_path(target_path)
commands = [ '-i', temp_file_path ]
if trim_frame_start is not None:
start_time = trim_frame_start / output_video_fps
@@ -119,8 +121,8 @@ def restore_audio(target_path : str, output_path : str, output_video_fps : Fps)
def replace_audio(target_path : str, audio_path : str, output_path : str) -> bool:
temp_output_path = get_temp_output_video_path(target_path)
commands = [ '-i', temp_output_path, '-i', audio_path, '-af', 'apad', '-shortest', '-y', output_path ]
temp_file_path = get_temp_file_path(target_path)
commands = [ '-i', temp_file_path, '-i', audio_path, '-af', 'apad', '-shortest', '-y', output_path ]
return run_ffmpeg(commands)
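restore_audio converts trim frame indices into a seek offset via start_time = trim_frame_start / output_video_fps, since ffmpeg's -ss flag takes seconds rather than frames. A tiny sketch of that conversion (the helper name is illustrative, not from the diff):

```python
def frame_to_seconds(frame_number : int, fps : float) -> float:
    # ffmpeg's -ss / -to flags expect seconds, so trim frame indices are divided by the fps.
    return frame_number / fps
```

For a 25 fps target, trim_frame_start = 50 would produce a seek offset of 2.0 seconds.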

View File

@@ -7,9 +7,10 @@ import filetype
from pathlib import Path
import facefusion.globals
from facefusion.common_helper import is_windows
TEMP_DIRECTORY_PATH = os.path.join(tempfile.gettempdir(), 'facefusion')
TEMP_OUTPUT_VIDEO_NAME = 'temp.mp4'
if is_windows():
import ctypes
def get_temp_frame_paths(target_path : str) -> List[str]:
@@ -22,14 +23,16 @@ def get_temp_frames_pattern(target_path : str, temp_frame_prefix : str) -> str:
return os.path.join(temp_directory_path, temp_frame_prefix + '.' + facefusion.globals.temp_frame_format)
def get_temp_file_path(target_path : str) -> str:
_, target_extension = os.path.splitext(os.path.basename(target_path))
temp_directory_path = get_temp_directory_path(target_path)
return os.path.join(temp_directory_path, 'temp' + target_extension)
def get_temp_directory_path(target_path : str) -> str:
target_name, _ = os.path.splitext(os.path.basename(target_path))
return os.path.join(TEMP_DIRECTORY_PATH, target_name)
def get_temp_output_video_path(target_path : str) -> str:
temp_directory_path = get_temp_directory_path(target_path)
return os.path.join(temp_directory_path, TEMP_OUTPUT_VIDEO_NAME)
temp_directory_path = os.path.join(tempfile.gettempdir(), 'facefusion')
return os.path.join(temp_directory_path, target_name)
def create_temp(target_path : str) -> None:
@@ -38,22 +41,30 @@ def create_temp(target_path : str) -> None:
def move_temp(target_path : str, output_path : str) -> None:
temp_output_video_path = get_temp_output_video_path(target_path)
if is_file(temp_output_video_path):
temp_file_path = get_temp_file_path(target_path)
if is_file(temp_file_path):
if is_file(output_path):
os.remove(output_path)
shutil.move(temp_output_video_path, output_path)
shutil.move(temp_file_path, output_path)
def clear_temp(target_path : str) -> None:
temp_directory_path = get_temp_directory_path(target_path)
parent_directory_path = os.path.dirname(temp_directory_path)
if not facefusion.globals.keep_temp and is_directory(temp_directory_path):
shutil.rmtree(temp_directory_path, ignore_errors = True)
if os.path.exists(parent_directory_path) and not os.listdir(parent_directory_path):
os.rmdir(parent_directory_path)
def get_file_size(file_path : str) -> int:
if is_file(file_path):
return os.path.getsize(file_path)
return 0
def is_file(file_path : str) -> bool:
return bool(file_path and os.path.isfile(file_path))
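get_temp_file_path generalises the old get_temp_output_video_path (which was hard-coded to temp.mp4) by reusing the target's own extension, which is what lets copy_image and merge_video share one helper. A standalone sketch of the path derivation as the diff defines it:

```python
import os
import tempfile

TEMP_DIRECTORY_PATH = os.path.join(tempfile.gettempdir(), 'facefusion')

def get_temp_file_path(target_path : str) -> str:
    # Keep the target's extension so image and video targets share one temp helper.
    target_name, target_extension = os.path.splitext(os.path.basename(target_path))
    return os.path.join(TEMP_DIRECTORY_PATH, target_name, 'temp' + target_extension)
```

A target of /videos/clip.webm thus maps to …/facefusion/clip/temp.webm inside the system temp directory.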
@@ -105,5 +116,20 @@ def resolve_relative_path(path : str) -> str:
def list_directory(directory_path : str) -> Optional[List[str]]:
if is_directory(directory_path):
files = os.listdir(directory_path)
return sorted([ Path(file).stem for file in files if not Path(file).stem.startswith(('.', '__')) ])
files = [ Path(file).stem for file in files if not Path(file).stem.startswith(('.', '__')) ]
return sorted(files)
return None
def sanitize_path_for_windows(full_path : str) -> Optional[str]:
buffer_size = 0
while True:
unicode_buffer = ctypes.create_unicode_buffer(buffer_size)
buffer_threshold = ctypes.windll.kernel32.GetShortPathNameW(full_path, unicode_buffer, buffer_size) #type:ignore[attr-defined]
if buffer_size > buffer_threshold:
return unicode_buffer.value
if buffer_threshold == 0:
return None
buffer_size = buffer_threshold
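sanitize_path_for_windows leans on the Win32 contract of GetShortPathNameW: called with a buffer that is too small it returns the required buffer size, and called with a sufficient buffer it returns the number of characters copied, so the loop grows the buffer until the returned threshold drops below the buffer size. The sketch below reproduces that negotiation against a stub instead of ctypes.windll, so the pattern can be followed on any platform (the stub's uppercased "short path" is purely illustrative):

```python
from typing import List, Optional

def fake_get_short_path_name(full_path : str, buffer : List[str], buffer_size : int) -> int:
    # Stub for GetShortPathNameW: report the required size (including the terminator)
    # when the buffer is too small, otherwise fill it and report the copied length.
    short_path = full_path.upper()  # stand-in for the real 8.3 short path
    required_size = len(short_path) + 1
    if buffer_size < required_size:
        return required_size
    buffer[:] = [ short_path ]
    return required_size - 1

def sanitize_path(full_path : str) -> Optional[str]:
    buffer_size = 0
    buffer : List[str] = []
    while True:
        buffer_threshold = fake_get_short_path_name(full_path, buffer, buffer_size)
        if buffer_size > buffer_threshold:
            return buffer[0]
        if buffer_threshold == 0:
            return None
        buffer_size = buffer_threshold
```

The first call always under-allocates (buffer_size = 0), learns the required size, and the second call succeeds, which matches the "prepare installer, prevent infinite loop" commit above.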

View File

@@ -3,6 +3,7 @@ from typing import List, Optional
from facefusion.typing import LogLevel, VideoMemoryStrategy, FaceSelectorMode, FaceAnalyserOrder, FaceAnalyserAge, FaceAnalyserGender, FaceMaskType, FaceMaskRegion, OutputVideoEncoder, OutputVideoPreset, FaceDetectorModel, FaceRecognizerModel, TempFrameFormat, Padding
# general
config_path : Optional[str] = None
source_paths : Optional[List[str]] = None
target_path : Optional[str] = None
output_path : Optional[str] = None
@@ -12,6 +13,7 @@ skip_download : Optional[bool] = None
headless : Optional[bool] = None
log_level : Optional[LogLevel] = None
# execution
execution_device_id : Optional[str] = None
execution_providers : List[str] = []
execution_thread_count : Optional[int] = None
execution_queue_count : Optional[int] = None
@@ -54,4 +56,5 @@ skip_audio : Optional[bool] = None
# frame processors
frame_processors : List[str] = []
# uis
open_browser : Optional[bool] = None
ui_layouts : List[str] = []

View File

@@ -1,35 +1,35 @@
from typing import Dict, Tuple
import sys
import os
import platform
import tempfile
import subprocess
import inquirer
from argparse import ArgumentParser, HelpFormatter
from facefusion import metadata, wording
from facefusion.common_helper import is_linux, is_macos, is_windows
if platform.system().lower() == 'darwin':
if is_macos():
os.environ['SYSTEM_VERSION_COMPAT'] = '0'
ONNXRUNTIMES : Dict[str, Tuple[str, str]] = {}
if platform.system().lower() == 'darwin':
ONNXRUNTIMES['default'] = ('onnxruntime', '1.17.1')
if is_macos():
ONNXRUNTIMES['default'] = ('onnxruntime', '1.17.3')
else:
ONNXRUNTIMES['default'] = ('onnxruntime', '1.17.1')
ONNXRUNTIMES['default'] = ('onnxruntime', '1.17.3')
ONNXRUNTIMES['cuda-12.2'] = ('onnxruntime-gpu', '1.17.1')
ONNXRUNTIMES['cuda-11.8'] = ('onnxruntime-gpu', '1.17.1')
ONNXRUNTIMES['openvino'] = ('onnxruntime-openvino', '1.17.1')
if platform.system().lower() == 'linux':
ONNXRUNTIMES['openvino'] = ('onnxruntime-openvino', '1.15.0')
if is_linux():
ONNXRUNTIMES['rocm-5.4.2'] = ('onnxruntime-rocm', '1.16.3')
ONNXRUNTIMES['rocm-5.6'] = ('onnxruntime-rocm', '1.16.3')
if platform.system().lower() == 'windows':
ONNXRUNTIMES['directml'] = ('onnxruntime-directml', '1.17.1')
if is_windows():
ONNXRUNTIMES['directml'] = ('onnxruntime-directml', '1.17.3')
def cli() -> None:
program = ArgumentParser(formatter_class = lambda prog: HelpFormatter(prog, max_help_position = 130))
program = ArgumentParser(formatter_class = lambda prog: HelpFormatter(prog, max_help_position = 200))
program.add_argument('--onnxruntime', help = wording.get('help.install_dependency').format(dependency = 'onnxruntime'), choices = ONNXRUNTIMES.keys())
program.add_argument('--skip-conda', help = wording.get('help.skip_conda'), action = 'store_true')
program.add_argument('-v', '--version', version = metadata.get('name') + ' ' + metadata.get('version'), action = 'version')

View File

@@ -1,19 +1,19 @@
import platform
from facefusion.common_helper import is_macos, is_windows
if platform.system().lower() == 'windows':
if is_windows():
import ctypes
else:
import resource
def limit_system_memory(system_memory_limit : int = 1) -> bool:
if platform.system().lower() == 'darwin':
if is_macos():
system_memory_limit = system_memory_limit * (1024 ** 6)
else:
system_memory_limit = system_memory_limit * (1024 ** 3)
try:
if platform.system().lower() == 'windows':
ctypes.windll.kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(system_memory_limit), ctypes.c_size_t(system_memory_limit)) # type: ignore[attr-defined]
if is_windows():
ctypes.windll.kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(system_memory_limit), ctypes.c_size_t(system_memory_limit)) #type:ignore[attr-defined]
else:
resource.setrlimit(resource.RLIMIT_DATA, (system_memory_limit, system_memory_limit))
return True

View File

@@ -2,7 +2,7 @@ METADATA =\
{
'name': 'FaceFusion',
'description': 'Next generation face swapper and enhancer',
'version': '2.5.3',
'version': '2.6.0',
'license': 'MIT',
'author': 'Henry Ruhs',
'url': 'https://facefusion.io'

View File

@@ -23,21 +23,17 @@ def normalize_output_path(target_path : Optional[str], output_path : Optional[st
def normalize_padding(padding : Optional[List[int]]) -> Optional[Padding]:
if padding and len(padding) == 1:
return tuple([ padding[0], padding[0], padding[0], padding[0] ]) # type: ignore[return-value]
return tuple([ padding[0] ] * 4) #type:ignore[return-value]
if padding and len(padding) == 2:
return tuple([ padding[0], padding[1], padding[0], padding[1] ]) # type: ignore[return-value]
return tuple([ padding[0], padding[1], padding[0], padding[1] ]) #type:ignore[return-value]
if padding and len(padding) == 3:
return tuple([ padding[0], padding[1], padding[2], padding[1] ]) # type: ignore[return-value]
return tuple([ padding[0], padding[1], padding[2], padding[1] ]) #type:ignore[return-value]
if padding and len(padding) == 4:
return tuple(padding) # type: ignore[return-value]
return tuple(padding) #type:ignore[return-value]
return None
def normalize_fps(fps : Optional[float]) -> Optional[Fps]:
if fps is not None:
if fps < 1.0:
return 1.0
if fps > 60.0:
return 60.0
return fps
return max(1.0, min(fps, 60.0))
return None
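The rewritten normalize_fps replaces the two explicit branch checks with a single clamp into the supported 1.0 to 60.0 range. A self-contained copy of the clamped form, mirroring the diff:

```python
from typing import Optional

def normalize_fps(fps : Optional[float]) -> Optional[float]:
    # Clamp any provided fps into the supported 1.0 .. 60.0 range.
    if fps is not None:
        return max(1.0, min(fps, 60.0))
    return None
```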

View File

@@ -8,7 +8,7 @@ face_enhancer_models : List[FaceEnhancerModel] = [ 'codeformer', 'gfpgan_1.2', '
face_swapper_models : List[FaceSwapperModel] = [ 'blendswap_256', 'inswapper_128', 'inswapper_128_fp16', 'simswap_256', 'simswap_512_unofficial', 'uniface_256' ]
frame_colorizer_models : List[FrameColorizerModel] = [ 'ddcolor', 'ddcolor_artistic', 'deoldify', 'deoldify_artistic', 'deoldify_stable' ]
frame_colorizer_sizes : List[str] = [ '192x192', '256x256', '384x384', '512x512' ]
frame_enhancer_models : List[FrameEnhancerModel] = [ 'lsdir_x4', 'nomos8k_sc_x4', 'real_esrgan_x2', 'real_esrgan_x2_fp16', 'real_esrgan_x4', 'real_esrgan_x4_fp16', 'real_hatgan_x4', 'span_kendata_x4' ]
frame_enhancer_models : List[FrameEnhancerModel] = [ 'clear_reality_x4', 'lsdir_x4', 'nomos8k_sc_x4', 'real_esrgan_x2', 'real_esrgan_x2_fp16', 'real_esrgan_x4', 'real_esrgan_x4_fp16', 'real_hatgan_x4', 'span_kendata_x4', 'ultra_sharp_x4' ]
lip_syncer_models : List[LipSyncerModel] = [ 'wav2lip_gan' ]
face_enhancer_blend_range : List[int] = create_int_range(0, 100, 1)

View File

@@ -104,7 +104,7 @@ def get_frame_processor() -> Any:
sleep(0.5)
if FRAME_PROCESSOR is None:
model_path = get_options('model').get('path')
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
return FRAME_PROCESSOR

View File

@@ -1,7 +1,6 @@
from typing import Any, List, Literal, Optional
from argparse import ArgumentParser
from time import sleep
import platform
import numpy
import onnx
import onnxruntime
@@ -10,6 +9,7 @@ from onnx import numpy_helper
import facefusion.globals
import facefusion.processors.frame.core as frame_processors
from facefusion import config, process_manager, logger, wording
from facefusion.common_helper import is_macos
from facefusion.execution import apply_execution_provider_options
from facefusion.face_analyser import get_one_face, get_average_face, get_many_faces, find_similar_faces, clear_face_analyser
from facefusion.face_masker import create_static_box_mask, create_occlusion_mask, create_region_mask, clear_face_occluder, clear_face_parser
@@ -103,7 +103,7 @@ def get_frame_processor() -> Any:
sleep(0.5)
if FRAME_PROCESSOR is None:
model_path = get_options('model').get('path')
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
return FRAME_PROCESSOR
@@ -150,7 +150,7 @@ def set_options(key : Literal['model'], value : Any) -> None:
def register_args(program : ArgumentParser) -> None:
if platform.system().lower() == 'darwin':
if is_macos():
face_swapper_model_fallback = 'inswapper_128'
else:
face_swapper_model_fallback = 'inswapper_128_fp16'

View File

@@ -68,7 +68,7 @@ def get_frame_processor() -> Any:
sleep(0.5)
if FRAME_PROCESSOR is None:
model_path = get_options('model').get('path')
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
return FRAME_PROCESSOR

View File

@@ -26,60 +26,74 @@ FRAME_PROCESSOR = None
NAME = __name__.upper()
MODELS : ModelSet =\
{
'clear_reality_x4':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/clear_reality_x4.onnx',
'path': resolve_relative_path('../.assets/models/clear_reality_x4.onnx'),
'size': (128, 8, 4),
'scale': 4
},
'lsdir_x4':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/lsdir_x4.onnx',
'path': resolve_relative_path('../.assets/models/lsdir_x4.onnx'),
'size': (128, 8, 2),
'size': (128, 8, 4),
'scale': 4
},
'nomos8k_sc_x4':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/nomos8k_sc_x4.onnx',
'path': resolve_relative_path('../.assets/models/nomos8k_sc_x4.onnx'),
'size': (128, 8, 2),
'size': (128, 8, 4),
'scale': 4
},
'real_esrgan_x2':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/real_esrgan_x2.onnx',
'path': resolve_relative_path('../.assets/models/real_esrgan_x2.onnx'),
'size': (128, 8, 2),
'size': (256, 16, 8),
'scale': 2
},
'real_esrgan_x2_fp16':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/real_esrgan_x2_fp16.onnx',
'path': resolve_relative_path('../.assets/models/real_esrgan_x2_fp16.onnx'),
'size': (128, 8, 2),
'size': (256, 16, 8),
'scale': 2
},
'real_esrgan_x4':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/real_esrgan_x4.onnx',
'path': resolve_relative_path('../.assets/models/real_esrgan_x4.onnx'),
'size': (128, 8, 2),
'size': (256, 16, 8),
'scale': 4
},
'real_esrgan_x4_fp16':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/real_esrgan_x4_fp16.onnx',
'path': resolve_relative_path('../.assets/models/real_esrgan_x4_fp16.onnx'),
'size': (128, 8, 2),
'size': (256, 16, 8),
'scale': 4
},
'real_hatgan_x4':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/real_hatgan_x4.onnx',
'path': resolve_relative_path('../.assets/models/real_hatgan_x4.onnx'),
'size': (256, 8, 2),
'size': (256, 16, 8),
'scale': 4
},
'span_kendata_x4':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/span_kendata_x4.onnx',
'path': resolve_relative_path('../.assets/models/span_kendata_x4.onnx'),
'size': (128, 8, 2),
'size': (128, 8, 4),
'scale': 4
},
'ultra_sharp_x4':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/ultra_sharp_x4.onnx',
'path': resolve_relative_path('../.assets/models/ultra_sharp_x4.onnx'),
'size': (128, 8, 4),
'scale': 4
}
}
@@ -94,7 +108,7 @@ def get_frame_processor() -> Any:
sleep(0.5)
if FRAME_PROCESSOR is None:
model_path = get_options('model').get('path')
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
return FRAME_PROCESSOR

View File

@@ -49,7 +49,7 @@ def get_frame_processor() -> Any:
sleep(0.5)
if FRAME_PROCESSOR is None:
model_path = get_options('model').get('path')
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
return FRAME_PROCESSOR

View File

@@ -6,7 +6,7 @@ FaceDebuggerItem = Literal['bounding-box', 'face-landmark-5', 'face-landmark-5/6
FaceEnhancerModel = Literal['codeformer', 'gfpgan_1.2', 'gfpgan_1.3', 'gfpgan_1.4', 'gpen_bfr_256', 'gpen_bfr_512', 'gpen_bfr_1024', 'gpen_bfr_2048', 'restoreformer_plus_plus']
FaceSwapperModel = Literal['blendswap_256', 'inswapper_128', 'inswapper_128_fp16', 'simswap_256', 'simswap_512_unofficial', 'uniface_256']
FrameColorizerModel = Literal['ddcolor', 'ddcolor_artistic', 'deoldify', 'deoldify_artistic', 'deoldify_stable']
FrameEnhancerModel = Literal['lsdir_x4', 'nomos8k_sc_x4', 'real_esrgan_x2', 'real_esrgan_x2_fp16', 'real_esrgan_x4', 'real_esrgan_x4_fp16', 'real_hatgan_x4', 'span_kendata_x4']
FrameEnhancerModel = Literal['clear_reality_x4', 'lsdir_x4', 'nomos8k_sc_x4', 'real_esrgan_x2', 'real_esrgan_x2_fp16', 'real_esrgan_x4', 'real_esrgan_x4_fp16', 'real_hatgan_x4', 'span_kendata_x4', 'ultra_sharp_x4']
LipSyncerModel = Literal['wav2lip_gan']
FaceDebuggerInputs = TypedDict('FaceDebuggerInputs',

View File

@@ -7,10 +7,10 @@ FaceLandmark5 = numpy.ndarray[Any, Any]
FaceLandmark68 = numpy.ndarray[Any, Any]
FaceLandmarkSet = TypedDict('FaceLandmarkSet',
{
'5' : FaceLandmark5, # type: ignore[valid-type]
'5/68' : FaceLandmark5, # type: ignore[valid-type]
'68' : FaceLandmark68, # type: ignore[valid-type]
'68/5' : FaceLandmark68 # type: ignore[valid-type]
'5' : FaceLandmark5, #type:ignore[valid-type]
'5/68' : FaceLandmark5, #type:ignore[valid-type]
'68' : FaceLandmark68, #type:ignore[valid-type]
'68/5' : FaceLandmark68 #type:ignore[valid-type]
})
Score = float
FaceScoreSet = TypedDict('FaceScoreSet',

View File

@@ -5,8 +5,11 @@ import facefusion.globals
from facefusion import wording
from facefusion.face_store import clear_static_faces, clear_reference_faces
from facefusion.uis.typing import File
from facefusion.filesystem import is_image, is_video
from facefusion.filesystem import get_file_size, is_image, is_video
from facefusion.uis.core import register_ui_component
from facefusion.vision import get_video_frame, normalize_frame_color
FILE_SIZE_LIMIT = 512 * 1024 * 1024
TARGET_FILE : Optional[gradio.File] = None
TARGET_IMAGE : Optional[gradio.Image] = None
@@ -28,20 +31,34 @@ def render() -> None:
'.png',
'.jpg',
'.webp',
'.webm',
'.mp4'
],
value = facefusion.globals.target_path if is_target_image or is_target_video else None
)
TARGET_IMAGE = gradio.Image(
value = TARGET_FILE.value['name'] if is_target_image else None,
visible = is_target_image,
show_label = False
)
TARGET_VIDEO = gradio.Video(
value = TARGET_FILE.value['name'] if is_target_video else None,
visible = is_target_video,
show_label = False
)
target_image_args =\
{
'show_label': False,
'visible': False
}
target_video_args =\
{
'show_label': False,
'visible': False
}
if is_target_image:
target_image_args['value'] = TARGET_FILE.value['name']
target_image_args['visible'] = True
if is_target_video:
if get_file_size(facefusion.globals.target_path) > FILE_SIZE_LIMIT:
preview_vision_frame = normalize_frame_color(get_video_frame(facefusion.globals.target_path))
target_image_args['value'] = preview_vision_frame
target_image_args['visible'] = True
else:
target_video_args['value'] = TARGET_FILE.value['name']
target_video_args['visible'] = True
TARGET_IMAGE = gradio.Image(**target_image_args)
TARGET_VIDEO = gradio.Video(**target_video_args)
register_ui_component('target_image', TARGET_IMAGE)
register_ui_component('target_video', TARGET_VIDEO)
@@ -58,6 +75,9 @@ def update(file : File) -> Tuple[gradio.Image, gradio.Video]:
return gradio.Image(value = file.name, visible = True), gradio.Video(value = None, visible = False)
if file and is_video(file.name):
facefusion.globals.target_path = file.name
if get_file_size(file.name) > FILE_SIZE_LIMIT:
preview_vision_frame = normalize_frame_color(get_video_frame(file.name))
return gradio.Image(value = preview_vision_frame, visible = True), gradio.Video(value = None, visible = False)
return gradio.Image(value = None, visible = False), gradio.Video(value = file.name, visible = True)
facefusion.globals.target_path = None
return gradio.Image(value = None, visible = False), gradio.Video(value = None, visible = False)
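The target component now refuses to hand large videos to Gradio's video player and falls back to a single-frame image preview instead, gated by FILE_SIZE_LIMIT (512 MiB). A reduced sketch of that routing decision, with the actual file probing and frame extraction stubbed away:

```python
FILE_SIZE_LIMIT = 512 * 1024 * 1024

def choose_preview_component(is_video : bool, file_size : int) -> str:
    # Large videos get a single-frame image preview to avoid loading them in the browser.
    if is_video and file_size > FILE_SIZE_LIMIT:
        return 'image'
    if is_video:
        return 'video'
    return 'image'
```

In the real component the 'image' branch for oversized videos renders normalize_frame_color(get_video_frame(...)) rather than the file itself.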

View File

@@ -3,9 +3,10 @@ import gradio
import facefusion.globals
from facefusion import wording
from facefusion.face_store import clear_static_faces
from facefusion.vision import count_video_frame_total
from facefusion.filesystem import is_video
from facefusion.uis.core import get_ui_component, register_ui_component
from facefusion.uis.core import get_ui_components, register_ui_component
TRIM_FRAME_START_SLIDER : Optional[gradio.Slider] = None
TRIM_FRAME_END_SLIDER : Optional[gradio.Slider] = None
@@ -49,10 +50,13 @@ def render() -> None:
def listen() -> None:
TRIM_FRAME_START_SLIDER.release(update_trim_frame_start, inputs = TRIM_FRAME_START_SLIDER)
TRIM_FRAME_END_SLIDER.release(update_trim_frame_end, inputs = TRIM_FRAME_END_SLIDER)
target_video = get_ui_component('target_video')
if target_video:
for ui_component in get_ui_components(
[
'target_image',
'target_video'
]):
for method in [ 'upload', 'change', 'clear' ]:
getattr(target_video, method)(remote_update, outputs = [ TRIM_FRAME_START_SLIDER, TRIM_FRAME_END_SLIDER ])
getattr(ui_component, method)(remote_update, outputs = [ TRIM_FRAME_START_SLIDER, TRIM_FRAME_END_SLIDER ])
def remote_update() -> Tuple[gradio.Slider, gradio.Slider]:
@@ -65,9 +69,11 @@ def remote_update() -> Tuple[gradio.Slider, gradio.Slider]:
def update_trim_frame_start(trim_frame_start : int) -> None:
clear_static_faces()
facefusion.globals.trim_frame_start = trim_frame_start if trim_frame_start > 0 else None
def update_trim_frame_end(trim_frame_end : int) -> None:
clear_static_faces()
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
facefusion.globals.trim_frame_end = trim_frame_end if trim_frame_end < video_frame_total else None
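Both trim slider callbacks normalize boundary values to None, so a full-range trim stores no trim at all; update_trim_frame_end additionally checks against count_video_frame_total. A sketch of that normalization with the frame total passed in directly:

```python
from typing import Optional, Tuple

def normalize_trim(trim_frame_start : int, trim_frame_end : int, video_frame_total : int) -> Tuple[Optional[int], Optional[int]]:
    # Boundary values mean "no trim" and are stored as None.
    start = trim_frame_start if trim_frame_start > 0 else None
    end = trim_frame_end if trim_frame_end < video_frame_total else None
    return start, end
```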

View File

@@ -1,6 +1,5 @@
from typing import Optional, Generator, Deque
import os
import platform
import subprocess
import cv2
import gradio
@@ -12,6 +11,7 @@ from tqdm import tqdm
import facefusion.globals
from facefusion import logger, wording
from facefusion.audio import create_empty_audio_frame
from facefusion.common_helper import is_windows
from facefusion.content_analyser import analyse_stream
from facefusion.filesystem import filter_image_paths
from facefusion.typing import VisionFrame, Face, Fps
@@ -32,7 +32,7 @@ def get_webcam_capture() -> Optional[cv2.VideoCapture]:
global WEBCAM_CAPTURE
if WEBCAM_CAPTURE is None:
if platform.system().lower() == 'windows':
if is_windows():
webcam_capture = cv2.VideoCapture(0, cv2.CAP_DSHOW)
else:
webcam_capture = cv2.VideoCapture(0)
@@ -98,11 +98,11 @@ def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -
stream = None
if webcam_mode in [ 'udp', 'v4l2' ]:
stream = open_stream(webcam_mode, webcam_resolution, webcam_fps) # type: ignore[arg-type]
stream = open_stream(webcam_mode, webcam_resolution, webcam_fps) #type:ignore[arg-type]
webcam_width, webcam_height = unpack_resolution(webcam_resolution)
webcam_capture = get_webcam_capture()
if webcam_capture and webcam_capture.isOpened():
webcam_capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG')) # type: ignore[attr-defined]
webcam_capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG')) #type:ignore[attr-defined]
webcam_capture.set(cv2.CAP_PROP_FRAME_WIDTH, webcam_width)
webcam_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, webcam_height)
webcam_capture.set(cv2.CAP_PROP_FPS, webcam_fps)

View File

@@ -64,4 +64,4 @@ def listen() -> None:
def run(ui : gradio.Blocks) -> None:
concurrency_count = min(2, multiprocessing.cpu_count())
ui.queue(concurrency_count = concurrency_count).launch(show_api = False, quiet = True)
ui.queue(concurrency_count = concurrency_count).launch(show_api = False, quiet = True, inbrowser = facefusion.globals.open_browser)

View File

@@ -1,6 +1,7 @@
import multiprocessing
import gradio
import facefusion.globals
from facefusion.uis.components import about, frame_processors, frame_processors_options, execution, execution_thread_count, execution_queue_count, memory, temp_frame, output_options, common_options, source, target, output, preview, trim_frame, face_analyser, face_selector, face_masker
@@ -77,4 +78,4 @@ def listen() -> None:
def run(ui : gradio.Blocks) -> None:
concurrency_count = min(8, multiprocessing.cpu_count())
ui.queue(concurrency_count = concurrency_count).launch(show_api = False, quiet = True)
ui.queue(concurrency_count = concurrency_count).launch(show_api = False, quiet = True, inbrowser = facefusion.globals.open_browser)

View File

@@ -1,6 +1,7 @@
import multiprocessing
import gradio
import facefusion.globals
from facefusion.uis.components import about, frame_processors, frame_processors_options, execution, execution_thread_count, webcam_options, source, webcam
@@ -46,4 +47,4 @@ def listen() -> None:
def run(ui : gradio.Blocks) -> None:
concurrency_count = min(2, multiprocessing.cpu_count())
ui.queue(concurrency_count = concurrency_count).launch(show_api = False, quiet = True)
ui.queue(concurrency_count = concurrency_count).launch(show_api = False, quiet = True, inbrowser = facefusion.globals.open_browser)

View File

@@ -4,9 +4,10 @@ import cv2
import numpy
from cv2.typing import Size
from facefusion.common_helper import is_windows
from facefusion.typing import VisionFrame, Resolution, Fps
from facefusion.choices import image_template_sizes, video_template_sizes
from facefusion.filesystem import is_image, is_video
from facefusion.filesystem import is_image, is_video, sanitize_path_for_windows
@lru_cache(maxsize = 128)
@@ -24,12 +25,16 @@ def read_static_images(image_paths : List[str]) -> Optional[List[VisionFrame]]:
def read_image(image_path : str) -> Optional[VisionFrame]:
if is_image(image_path):
if is_windows():
image_path = sanitize_path_for_windows(image_path)
return cv2.imread(image_path)
return None
def write_image(image_path : str, vision_frame : VisionFrame) -> bool:
if image_path:
if is_windows():
image_path = sanitize_path_for_windows(image_path)
return cv2.imwrite(image_path, vision_frame)
return False
@@ -50,19 +55,6 @@ def restrict_image_resolution(image_path : str, resolution : Resolution) -> Reso
return resolution
def get_video_frame(video_path : str, frame_number : int = 0) -> Optional[VisionFrame]:
if is_video(video_path):
video_capture = cv2.VideoCapture(video_path)
if video_capture.isOpened():
frame_total = video_capture.get(cv2.CAP_PROP_FRAME_COUNT)
video_capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, frame_number - 1))
has_vision_frame, vision_frame = video_capture.read()
video_capture.release()
if has_vision_frame:
return vision_frame
return None
def create_image_resolutions(resolution : Resolution) -> List[str]:
resolutions = []
temp_resolutions = []
@@ -78,8 +70,25 @@ def create_image_resolutions(resolution : Resolution) -> List[str]:
return resolutions
def get_video_frame(video_path : str, frame_number : int = 0) -> Optional[VisionFrame]:
if is_video(video_path):
if is_windows():
video_path = sanitize_path_for_windows(video_path)
video_capture = cv2.VideoCapture(video_path)
if video_capture.isOpened():
frame_total = video_capture.get(cv2.CAP_PROP_FRAME_COUNT)
video_capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, frame_number - 1))
has_vision_frame, vision_frame = video_capture.read()
video_capture.release()
if has_vision_frame:
return vision_frame
return None
def count_video_frame_total(video_path : str) -> int:
if is_video(video_path):
if is_windows():
video_path = sanitize_path_for_windows(video_path)
video_capture = cv2.VideoCapture(video_path)
if video_capture.isOpened():
video_frame_total = int(video_capture.get(cv2.CAP_PROP_FRAME_COUNT))
@@ -90,6 +99,8 @@ def count_video_frame_total(video_path : str) -> int:
def detect_video_fps(video_path : str) -> Optional[float]:
if is_video(video_path):
if is_windows():
video_path = sanitize_path_for_windows(video_path)
video_capture = cv2.VideoCapture(video_path)
if video_capture.isOpened():
video_fps = video_capture.get(cv2.CAP_PROP_FPS)
@@ -108,6 +119,8 @@ def restrict_video_fps(video_path : str, fps : Fps) -> Fps:
def detect_video_resolution(video_path : str) -> Optional[Resolution]:
if is_video(video_path):
if is_windows():
video_path = sanitize_path_for_windows(video_path)
video_capture = cv2.VideoCapture(video_path)
if video_capture.isOpened():
width = video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)
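Every reader in the hunks above follows the same guarded pattern: sanitize the path on Windows, open a capture, seek or query, read, then always release. A minimal sketch of that control flow, runnable without OpenCV installed (the `FakeCapture` class is a hypothetical stand-in for `cv2.VideoCapture`; the Windows sanitize step is omitted):

```python
from typing import Any, Optional, Tuple

class FakeCapture:
	# hypothetical stand-in for cv2.VideoCapture, so the pattern can run without OpenCV
	def __init__(self, frame_total : int) -> None:
		self.frame_total = frame_total
		self.position = 0

	def isOpened(self) -> bool:
		return self.frame_total > 0

	def get(self, property_id : int) -> float:
		return float(self.frame_total)

	def set(self, property_id : int, value : float) -> None:
		self.position = int(value)

	def read(self) -> Tuple[bool, Any]:
		return self.position < self.frame_total, ('frame', self.position)

	def release(self) -> None:
		self.frame_total = 0


def get_video_frame(video_capture : FakeCapture, frame_number : int = 0) -> Optional[Any]:
	# the shared pattern from the diff: check, seek (clamped to the frame count),
	# read a single frame, always release the capture before returning
	if video_capture.isOpened():
		frame_total = video_capture.get(0)
		video_capture.set(1, min(frame_total, frame_number - 1))
		has_vision_frame, vision_frame = video_capture.read()
		video_capture.release()
		if has_vision_frame:
			return vision_frame
	return None
```

Releasing inside the guarded branch matters because each helper opens its own capture; leaking handles across repeated calls would exhaust file descriptors on long videos.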

View File

@@ -31,7 +31,7 @@ def get_voice_extractor() -> Any:
 			sleep(0.5)
 		if VOICE_EXTRACTOR is None:
 			model_path = MODELS.get('voice_extractor').get('path')
-			VOICE_EXTRACTOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
+			VOICE_EXTRACTOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
 	return VOICE_EXTRACTOR

View File

@@ -55,6 +55,7 @@ WORDING : Dict[str, Any] =\
 		'install_dependency': 'select the variant of {dependency} to install',
 		'skip_conda': 'skip the conda environment check',
 		# general
+		'config': 'choose the config file to override defaults',
 		'source': 'choose single or multiple source images or audios',
 		'target': 'choose single target image or video',
 		'output': 'specify the output file or directory',
@@ -64,6 +65,7 @@ WORDING : Dict[str, Any] =\
 		'headless': 'run the program without a user interface',
 		'log_level': 'adjust the message severity displayed in the terminal',
 		# execution
+		'execution_device_id': 'specify the device used for processing',
 		'execution_providers': 'accelerate the model inference using different providers (choices: {choices}, ...)',
 		'execution_thread_count': 'specify the amount of parallel threads while processing',
 		'execution_queue_count': 'specify the amount of frames each thread is processing',
@@ -115,6 +117,7 @@ WORDING : Dict[str, Any] =\
 		'frame_enhancer_blend': 'blend the enhanced into the previous frame',
 		'lip_syncer_model': 'choose the model responsible for syncing the lips',
 		# uis
+		'open_browser': 'open the browser once the program is ready',
 		'ui_layouts': 'launch a single or multiple UI layouts (choices: {choices}, ...)'
 	},
 	'uis':

View File

@@ -2,8 +2,8 @@ filetype==1.2.0
 gradio==3.50.2
 numpy==1.26.4
 onnx==1.16.0
-onnxruntime==1.17.1
-opencv-python==4.8.1.78
+onnxruntime==1.17.3
+opencv-python==4.9.0.80
 psutil==5.9.8
-tqdm==4.66.2
-scipy==1.12.0
+tqdm==4.66.4
+scipy==1.13.0

View File

@@ -12,5 +12,4 @@ def test_create_int_range() -> None:
 def test_create_float_range() -> None:
 	assert create_float_range(0.0, 1.0, 0.5) == [ 0.0, 0.5, 1.0 ]
 	assert create_float_range(0.0, 0.2, 0.05) == [ 0.0, 0.05, 0.10, 0.15, 0.20 ]
-	assert create_float_range(0.0, 1.0, 0.05) == [ 0.0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 1.0 ]
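The surviving assertions still pin down the helper's contract: an inclusive range with evenly spaced float steps. A minimal sketch of an implementation satisfying them (not necessarily the project's exact code; the two-decimal rounding is an assumption):

```python
from typing import List

def create_float_range(start : float, end : float, step : float) -> List[float]:
	# iterate over scaled integers so repeated float addition cannot drift
	# (naive accumulation gives 0.1 + 0.05 = 0.15000000000000002)
	steps = round((end - start) / step)
	return [ round(start + index * step, 2) for index in range(steps + 1) ]
```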

View File

@@ -15,7 +15,8 @@ def test_multiple_execution_providers() -> None:
 		'CPUExecutionProvider',
 		('CUDAExecutionProvider',
 		{
+			'device_id': '1',
 			'cudnn_conv_algo_search': 'DEFAULT'
 		})
 	]
-	assert apply_execution_provider_options([ 'CPUExecutionProvider', 'CUDAExecutionProvider' ]) == execution_provider_with_options
+	assert apply_execution_provider_options('1', [ 'CPUExecutionProvider', 'CUDAExecutionProvider' ]) == execution_provider_with_options
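The updated signature threads the new `execution_device_id` into the provider-specific options. A sketch reproducing exactly the structure this test expects (the real helper likely handles more providers, e.g. OpenVINO for the Intel Arc fix mentioned in the changelog):

```python
from typing import Any, List

def apply_execution_provider_options(execution_device_id : str, execution_providers : List[str]) -> List[Any]:
	execution_providers_with_options : List[Any] = []

	for execution_provider in execution_providers:
		if execution_provider == 'CUDAExecutionProvider':
			# the device id stays a string: some runtimes report non-numeric ids
			execution_providers_with_options.append((execution_provider,
			{
				'device_id': execution_device_id,
				'cudnn_conv_algo_search': 'DEFAULT'
			}))
		else:
			execution_providers_with_options.append(execution_provider)
	return execution_providers_with_options
```

onnxruntime accepts this mixed list of plain provider names and `(name, options)` tuples directly as the `providers` argument of `InferenceSession`.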

View File

@@ -1,7 +1,9 @@
 import shutil
 import pytest
+from facefusion.common_helper import is_windows
 from facefusion.download import conditional_download
-from facefusion.filesystem import is_file, is_directory, is_audio, has_audio, is_image, has_image, is_video, filter_audio_paths, filter_image_paths, list_directory
+from facefusion.filesystem import get_file_size, is_file, is_directory, is_audio, has_audio, is_image, has_image, is_video, filter_audio_paths, filter_image_paths, list_directory, sanitize_path_for_windows
 @pytest.fixture(scope = 'module', autouse = True)
@@ -12,6 +14,12 @@ def before_all() -> None:
 		'https://github.com/facefusion/facefusion-assets/releases/download/examples/source.mp3',
 		'https://github.com/facefusion/facefusion-assets/releases/download/examples/target-240p.mp4'
 	])
+	shutil.copyfile('.assets/examples/source.jpg', '.assets/examples/söurce.jpg')
+def test_get_file_size() -> None:
+	assert get_file_size('.assets/examples/source.jpg') > 0
+	assert get_file_size('invalid') == 0
 def test_is_file() -> None:
@@ -74,3 +82,9 @@ def test_list_directory() -> None:
 	assert list_directory('.assets/examples')
 	assert list_directory('.assets/examples/source.jpg') is None
 	assert list_directory('invalid') is None
+def test_sanitize_path_for_windows() -> None:
+	if is_windows():
+		assert sanitize_path_for_windows('.assets/examples/söurce.jpg') == 'ASSETS~1/examples/SURCE~1.JPG'
+		assert sanitize_path_for_windows('invalid') is None
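The expected value `ASSETS~1/examples/SURCE~1.JPG` is the DOS 8.3 short form of the unicode path, which Windows exposes through the Win32 call `GetShortPathNameW`. A sketch of the usual buffer-resize loop (an assumption about the implementation; it returns `None` off Windows or when the path does not exist):

```python
import ctypes
from typing import Optional

def sanitize_path_for_windows(full_path : str) -> Optional[str]:
	# the first call with an empty buffer reports the required size,
	# the second call fills the buffer with the 8.3 short path
	try:
		buffer_size = 0
		while True:
			unicode_buffer = ctypes.create_unicode_buffer(buffer_size)
			buffer_limit = ctypes.windll.kernel32.GetShortPathNameW(full_path, unicode_buffer, buffer_size) #type:ignore[attr-defined]
			if buffer_size > buffer_limit:
				return unicode_buffer.value
			if buffer_limit == 0:
				# the path does not exist or the call failed
				return None
			buffer_size = buffer_limit
	except AttributeError:
		# ctypes.windll only exists on Windows
		return None
```

Short paths are pure ASCII, which sidesteps the unicode-path failures in `cv2.VideoCapture` that the vision.py hunks above guard against.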

View File

@@ -1,9 +1,8 @@
-import platform
+from facefusion.common_helper import is_linux, is_macos
 from facefusion.memory import limit_system_memory
 def test_limit_system_memory() -> None:
 	assert limit_system_memory(4) is True
-	if platform.system().lower() == 'darwin' or platform.system().lower() == 'linux':
+	if is_linux() or is_macos():
 		assert limit_system_memory(1024) is False

View File

@@ -1,10 +1,9 @@
-import platform
+from facefusion.common_helper import is_linux, is_macos
 from facefusion.normalizer import normalize_output_path, normalize_padding, normalize_fps
 def test_normalize_output_path() -> None:
-	if platform.system().lower() == 'linux' or platform.system().lower() == 'darwin':
+	if is_linux() or is_macos():
 		assert normalize_output_path('.assets/examples/target-240p.mp4', '.assets/examples/target-240p.mp4') == '.assets/examples/target-240p.mp4'
 		assert normalize_output_path('.assets/examples/target-240p.mp4', '.assets/examples').startswith('.assets/examples/target-240p')
 		assert normalize_output_path('.assets/examples/target-240p.mp4', '.assets/examples').endswith('.mp4')
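These assertions pin down the contract: a file output path passes through unchanged, while a directory output expands into a file name derived from the target's stem and extension. A sketch under those assumptions (the suffix appended for directory outputs is illustrative; the project may use a timestamp or hash instead):

```python
import os
from typing import Optional

def normalize_output_path(target_path : Optional[str], output_path : Optional[str]) -> Optional[str]:
	if target_path and output_path:
		target_name, target_extension = os.path.splitext(os.path.basename(target_path))
		if os.path.isdir(output_path):
			# derive a file name from the target when only a directory is given
			return os.path.join(output_path, target_name + '-output' + target_extension)
		return output_path
	return None
```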