* Improve typing for our callbacks

* Return 0 for get_download_size

* Introduce ONNX powered face enhancer

* Introduce ONNX powered face enhancer

* Introduce ONNX powered face enhancer

* Remove tile processing from frame enhancer

* Fix video compress translation for libvpx-vp9

* Allow zero values for video compression

* Develop (#134)

* Introduce model options to the frame processors

* Finish UI to select frame processors models

* Simplify frame processors options

* Fix lint in CI

* Rename all kind of settings to options

* Add blend to enhancers

* Simplify webcam mode naming

* Bypass SSL issues under Windows

* Fix blend of frame enhancer

* Massive CLI refactoring, Register and apply ARGS via the frame processors

* Refine UI theme and introduce donate button

* Update dependencies and fix cpu only torch

* Update dependencies and fix cpu only torch

* Fix theme, Fix frame_processors in headless mode

* Remove useless astype

* Disable CoreML for the ONNX face enhancer

* Disable CoreML for the ONNX face enhancer

* Predict webcam too

* Improve resize of preview

* Change output quality defaults, Move options to the right

* Support for codeformer model

* Update the typo

* Add GPEN and GFPGAN 1.2

* Extract blend_frame methods

* Extend the installer

* Revert broken Gradio

* Rework on ui components

* Move output path selector to the output options

* Remove tons of pointless component updates

* Reset more base theme styling

* Use latest Gradio

* Fix the sliders

* More styles

* Update torch to 2.1.0

* Add RealESRNet_x4plus

* Fix that button

* Use latest onnxruntime-silicon

* Looks stable to me

* Lowercase model keys, Update preview and readme
Henry Ruhs 2023-10-09 10:16:13 +02:00, committed by GitHub
parent 3e361e7701
commit a6809c3ccb
53 changed files with 1105 additions and 563 deletions

BIN .github/preview.png vendored (binary file not shown; 1.1 MiB before and after)

View File

@@ -30,6 +30,6 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: '3.10'
- run: pip install -r requirements.txt
- run: python install.py --torch cpu --onnxruntime cpu
- run: pip install pytest
- run: pytest
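
The workflow now provisions dependencies through the installer rather than a bare `pip install -r requirements.txt`. The same non-interactive invocation works locally; both flags are registered by `install.py` further down in this diff:

```
python install.py --torch cpu --onnxruntime cpu
```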

View File

@@ -29,15 +29,24 @@ Run the command:
```
python run.py [options]
options:
-h, --help show this help message and exit
-s SOURCE_PATH, --source SOURCE_PATH select a source image
-t TARGET_PATH, --target TARGET_PATH select a target image or video
-o OUTPUT_PATH, --output OUTPUT_PATH specify the output file or directory
--frame-processors FRAME_PROCESSORS [FRAME_PROCESSORS ...] choose from the available frame processors (choices: face_enhancer, face_swapper, frame_enhancer, ...)
--ui-layouts UI_LAYOUTS [UI_LAYOUTS ...] choose from the available ui layouts (choices: benchmark, webcam, default, ...)
--keep-fps preserve the frames per second (fps) of the target
--keep-temp retain temporary frames after processing
--skip-audio omit audio from the target
-v, --version show program's version number and exit
misc:
--skip-download omit automatic downloads and lookups
--headless run the program in headless mode
execution:
--execution-providers {cpu} [{cpu} ...] choose from the available execution providers (choices: cpu, ...)
--execution-thread-count EXECUTION_THREAD_COUNT specify the number of execution threads
--execution-queue-count EXECUTION_QUEUE_COUNT specify the number of execution queues
--max-memory MAX_MEMORY specify the maximum amount of ram to be used (in gb)
face recognition:
--face-recognition {reference,many} specify the method for face recognition
--face-analyser-direction {left-right,right-left,top-bottom,bottom-top,small-large,large-small} specify the direction used for face analysis
--face-analyser-age {child,teen,adult,senior} specify the age used for face analysis
@@ -45,20 +54,31 @@ python run.py [options]
--reference-face-position REFERENCE_FACE_POSITION specify the position of the reference face
--reference-face-distance REFERENCE_FACE_DISTANCE specify the distance between the reference face and the target face
--reference-frame-number REFERENCE_FRAME_NUMBER specify the number of the reference frame
frame extraction:
--trim-frame-start TRIM_FRAME_START specify the start frame for extraction
--trim-frame-end TRIM_FRAME_END specify the end frame for extraction
--temp-frame-format {jpg,png} specify the image format used for frame extraction
--temp-frame-quality [0-100] specify the image quality used for frame extraction
--keep-temp retain temporary frames after processing
output creation:
--output-image-quality [0-100] specify the quality used for the output image
--output-video-encoder {libx264,libx265,libvpx-vp9,h264_nvenc,hevc_nvenc} specify the encoder used for the output video
--output-video-quality [0-100] specify the quality used for the output video
--max-memory MAX_MEMORY specify the maximum amount of ram to be used (in gb)
--execution-providers {cpu} [{cpu} ...] choose from the available execution providers (choices: cpu, ...)
--execution-thread-count EXECUTION_THREAD_COUNT specify the number of execution threads
--execution-queue-count EXECUTION_QUEUE_COUNT specify the number of execution queues
--skip-download omit automatic downloads and lookups
--headless run the program in headless mode
-v, --version show program's version number and exit
--keep-fps preserve the frames per second (fps) of the target
--skip-audio omit audio from the target
frame processors:
--frame-processors FRAME_PROCESSORS [FRAME_PROCESSORS ...] choose from the available frame processors (choices: face_enhancer, face_swapper, frame_enhancer, ...)
--face-enhancer-model {codeformer,gfpgan_1.2,gfpgan_1.3,gfpgan_1.4,gpen_bfr_512} choose the model for the frame processor
--face-enhancer-blend [0-100] specify the blend factor for the frame processor
--face-swapper-model {inswapper_128,inswapper_128_fp16} choose the model for the frame processor
--frame-enhancer-model {realesrgan_x2plus,realesrgan_x4plus,realesrnet_x4plus} choose the model for the frame processor
--frame-enhancer-blend [0-100] specify the blend factor for the frame processor
uis:
--ui-layouts UI_LAYOUTS [UI_LAYOUTS ...] choose from the available ui layouts (choices: benchmark, webcam, default, ...)
```
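
For example, a typical headless run that combines the new grouped options might look like this (file paths are placeholders):

```
python run.py --headless -s source.jpg -t target.mp4 -o output.mp4 --frame-processors face_swapper face_enhancer --face-enhancer-model gfpgan_1.4 --face-enhancer-blend 80
```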

View File

@@ -2,10 +2,9 @@ from typing import List
from facefusion.typing import FaceRecognition, FaceAnalyserDirection, FaceAnalyserAge, FaceAnalyserGender, TempFrameFormat, OutputVideoEncoder
face_recognition : List[FaceRecognition] = [ 'reference', 'many' ]
face_analyser_direction : List[FaceAnalyserDirection] = [ 'left-right', 'right-left', 'top-bottom', 'bottom-top', 'small-large', 'large-small' ]
face_analyser_age : List[FaceAnalyserAge] = [ 'child', 'teen', 'adult', 'senior' ]
face_analyser_gender : List[FaceAnalyserGender] = [ 'male', 'female' ]
temp_frame_format : List[TempFrameFormat] = [ 'jpg', 'png' ]
output_video_encoder : List[OutputVideoEncoder] = [ 'libx264', 'libx265', 'libvpx-vp9', 'h264_nvenc', 'hevc_nvenc' ]
face_recognitions : List[FaceRecognition] = [ 'reference', 'many' ]
face_analyser_directions : List[FaceAnalyserDirection] = [ 'left-right', 'right-left', 'top-bottom', 'bottom-top', 'small-large', 'large-small' ]
face_analyser_ages : List[FaceAnalyserAge] = [ 'child', 'teen', 'adult', 'senior' ]
face_analyser_genders : List[FaceAnalyserGender] = [ 'male', 'female' ]
temp_frame_formats : List[TempFrameFormat] = [ 'jpg', 'png' ]
output_video_encoders : List[OutputVideoEncoder] = [ 'libx264', 'libx265', 'libvpx-vp9', 'h264_nvenc', 'hevc_nvenc' ]

View File

@@ -1,111 +1,158 @@
#!/usr/bin/env python3
import os
# single thread doubles cuda performance
os.environ['OMP_NUM_THREADS'] = '1'
# reduce tensorflow log level
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import signal
import sys
import warnings
from typing import List
import platform
import signal
import shutil
import argparse
import onnxruntime
import tensorflow
from argparse import ArgumentParser, HelpFormatter
import facefusion.choices
import facefusion.globals
from facefusion import wording, metadata
from facefusion import metadata, wording
from facefusion.predictor import predict_image, predict_video
from facefusion.processors.frame.core import get_frame_processors_modules
from facefusion.utilities import is_image, is_video, detect_fps, compress_image, merge_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clear_temp, normalize_output_path, list_module_names, decode_execution_providers, encode_execution_providers
from facefusion.processors.frame.core import get_frame_processors_modules, load_frame_processor_module
from facefusion.utilities import is_image, is_video, detect_fps, compress_image, merge_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clear_temp, list_module_names, encode_execution_providers, decode_execution_providers, normalize_output_path
warnings.filterwarnings('ignore', category = FutureWarning, module = 'insightface')
warnings.filterwarnings('ignore', category = UserWarning, module = 'torchvision')
def parse_args() -> None:
def cli() -> None:
signal.signal(signal.SIGINT, lambda signal_number, frame: destroy())
program = argparse.ArgumentParser(formatter_class = lambda prog: argparse.HelpFormatter(prog, max_help_position = 120))
program = ArgumentParser(formatter_class = lambda prog: HelpFormatter(prog, max_help_position = 120), add_help = False)
# general
program.add_argument('-s', '--source', help = wording.get('source_help'), dest = 'source_path')
program.add_argument('-t', '--target', help = wording.get('target_help'), dest = 'target_path')
program.add_argument('-o', '--output', help = wording.get('output_help'), dest = 'output_path')
program.add_argument('--frame-processors', help = wording.get('frame_processors_help').format(choices = ', '.join(list_module_names('facefusion/processors/frame/modules'))), dest = 'frame_processors', default = ['face_swapper'], nargs = '+')
program.add_argument('--ui-layouts', help = wording.get('ui_layouts_help').format(choices = ', '.join(list_module_names('facefusion/uis/layouts'))), dest = 'ui_layouts', default = ['default'], nargs = '+')
program.add_argument('--keep-fps', help = wording.get('keep_fps_help'), dest = 'keep_fps', action = 'store_true')
program.add_argument('--keep-temp', help = wording.get('keep_temp_help'), dest = 'keep_temp', action = 'store_true')
program.add_argument('--skip-audio', help = wording.get('skip_audio_help'), dest = 'skip_audio', action = 'store_true')
program.add_argument('--face-recognition', help = wording.get('face_recognition_help'), dest = 'face_recognition', default = 'reference', choices = facefusion.choices.face_recognition)
program.add_argument('--face-analyser-direction', help = wording.get('face_analyser_direction_help'), dest = 'face_analyser_direction', default = 'left-right', choices = facefusion.choices.face_analyser_direction)
program.add_argument('--face-analyser-age', help = wording.get('face_analyser_age_help'), dest = 'face_analyser_age', choices = facefusion.choices.face_analyser_age)
program.add_argument('--face-analyser-gender', help = wording.get('face_analyser_gender_help'), dest = 'face_analyser_gender', choices = facefusion.choices.face_analyser_gender)
program.add_argument('--reference-face-position', help = wording.get('reference_face_position_help'), dest = 'reference_face_position', type = int, default = 0)
program.add_argument('--reference-face-distance', help = wording.get('reference_face_distance_help'), dest = 'reference_face_distance', type = float, default = 1.5)
program.add_argument('--reference-frame-number', help = wording.get('reference_frame_number_help'), dest = 'reference_frame_number', type = int, default = 0)
program.add_argument('--trim-frame-start', help = wording.get('trim_frame_start_help'), dest = 'trim_frame_start', type = int)
program.add_argument('--trim-frame-end', help = wording.get('trim_frame_end_help'), dest = 'trim_frame_end', type = int)
program.add_argument('--temp-frame-format', help = wording.get('temp_frame_format_help'), dest = 'temp_frame_format', default = 'jpg', choices = facefusion.choices.temp_frame_format)
program.add_argument('--temp-frame-quality', help = wording.get('temp_frame_quality_help'), dest = 'temp_frame_quality', type = int, default = 100, choices = range(101), metavar = '[0-100]')
program.add_argument('--output-image-quality', help = wording.get('output_image_quality_help'), dest = 'output_image_quality', type = int, default = 90, choices = range(101), metavar = '[0-100]')
program.add_argument('--output-video-encoder', help = wording.get('output_video_encoder_help'), dest = 'output_video_encoder', default = 'libx264', choices = facefusion.choices.output_video_encoder)
program.add_argument('--output-video-quality', help = wording.get('output_video_quality_help'), dest = 'output_video_quality', type = int, default = 90, choices = range(101), metavar = '[0-100]')
program.add_argument('--max-memory', help = wording.get('max_memory_help'), dest = 'max_memory', type = int)
program.add_argument('--execution-providers', help = wording.get('execution_providers_help').format(choices = 'cpu'), dest = 'execution_providers', default = ['cpu'], choices = suggest_execution_providers_choices(), nargs = '+')
program.add_argument('--execution-thread-count', help = wording.get('execution_thread_count_help'), dest = 'execution_thread_count', type = int, default = suggest_execution_thread_count_default())
program.add_argument('--execution-queue-count', help = wording.get('execution_queue_count_help'), dest = 'execution_queue_count', type = int, default = 1)
program.add_argument('--skip-download', help = wording.get('skip_download_help'), dest = 'skip_download', action = 'store_true')
program.add_argument('--headless', help = wording.get('headless_help'), dest = 'headless', action = 'store_true')
program.add_argument('-v', '--version', version = metadata.get('name') + ' ' + metadata.get('version'), action = 'version')
# misc
group_misc = program.add_argument_group('misc')
group_misc.add_argument('--skip-download', help = wording.get('skip_download_help'), dest = 'skip_download', action = 'store_true')
group_misc.add_argument('--headless', help = wording.get('headless_help'), dest = 'headless', action = 'store_true')
# execution
group_execution = program.add_argument_group('execution')
group_execution.add_argument('--execution-providers', help = wording.get('execution_providers_help').format(choices = 'cpu'), dest = 'execution_providers', default = [ 'cpu' ], choices = encode_execution_providers(onnxruntime.get_available_providers()), nargs = '+')
group_execution.add_argument('--execution-thread-count', help = wording.get('execution_thread_count_help'), dest = 'execution_thread_count', type = int, default = 1)
group_execution.add_argument('--execution-queue-count', help = wording.get('execution_queue_count_help'), dest = 'execution_queue_count', type = int, default = 1)
group_execution.add_argument('--max-memory', help = wording.get('max_memory_help'), dest = 'max_memory', type = int)
# face recognition
group_face_recognition = program.add_argument_group('face recognition')
group_face_recognition.add_argument('--face-recognition', help = wording.get('face_recognition_help'), dest = 'face_recognition', default = 'reference', choices = facefusion.choices.face_recognitions)
group_face_recognition.add_argument('--face-analyser-direction', help = wording.get('face_analyser_direction_help'), dest = 'face_analyser_direction', default = 'left-right', choices = facefusion.choices.face_analyser_directions)
group_face_recognition.add_argument('--face-analyser-age', help = wording.get('face_analyser_age_help'), dest = 'face_analyser_age', choices = facefusion.choices.face_analyser_ages)
group_face_recognition.add_argument('--face-analyser-gender', help = wording.get('face_analyser_gender_help'), dest = 'face_analyser_gender', choices = facefusion.choices.face_analyser_genders)
group_face_recognition.add_argument('--reference-face-position', help = wording.get('reference_face_position_help'), dest = 'reference_face_position', type = int, default = 0)
group_face_recognition.add_argument('--reference-face-distance', help = wording.get('reference_face_distance_help'), dest = 'reference_face_distance', type = float, default = 1.5)
group_face_recognition.add_argument('--reference-frame-number', help = wording.get('reference_frame_number_help'), dest = 'reference_frame_number', type = int, default = 0)
# frame extraction
group_processing = program.add_argument_group('frame extraction')
group_processing.add_argument('--trim-frame-start', help = wording.get('trim_frame_start_help'), dest = 'trim_frame_start', type = int)
group_processing.add_argument('--trim-frame-end', help = wording.get('trim_frame_end_help'), dest = 'trim_frame_end', type = int)
group_processing.add_argument('--temp-frame-format', help = wording.get('temp_frame_format_help'), dest = 'temp_frame_format', default = 'jpg', choices = facefusion.choices.temp_frame_formats)
group_processing.add_argument('--temp-frame-quality', help = wording.get('temp_frame_quality_help'), dest = 'temp_frame_quality', type = int, default = 100, choices = range(101), metavar = '[0-100]')
group_processing.add_argument('--keep-temp', help = wording.get('keep_temp_help'), dest = 'keep_temp', action = 'store_true')
# output creation
group_output = program.add_argument_group('output creation')
group_output.add_argument('--output-image-quality', help = wording.get('output_image_quality_help'), dest = 'output_image_quality', type = int, default = 80, choices = range(101), metavar = '[0-100]')
group_output.add_argument('--output-video-encoder', help = wording.get('output_video_encoder_help'), dest = 'output_video_encoder', default = 'libx264', choices = facefusion.choices.output_video_encoders)
group_output.add_argument('--output-video-quality', help = wording.get('output_video_quality_help'), dest = 'output_video_quality', type = int, default = 80, choices = range(101), metavar = '[0-100]')
group_output.add_argument('--keep-fps', help = wording.get('keep_fps_help'), dest = 'keep_fps', action = 'store_true')
group_output.add_argument('--skip-audio', help = wording.get('skip_audio_help'), dest = 'skip_audio', action = 'store_true')
# frame processors
available_frame_processors = list_module_names('facefusion/processors/frame/modules')
program = ArgumentParser(parents = [ program ], formatter_class = program.formatter_class, add_help = True)
group_frame_processors = program.add_argument_group('frame processors')
group_frame_processors.add_argument('--frame-processors', help = wording.get('frame_processors_help').format(choices = ', '.join(available_frame_processors)), dest = 'frame_processors', default = [ 'face_swapper' ], nargs = '+')
for frame_processor in available_frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
frame_processor_module.register_args(group_frame_processors)
# uis
group_uis = program.add_argument_group('uis')
group_uis.add_argument('--ui-layouts', help = wording.get('ui_layouts_help').format(choices = ', '.join(list_module_names('facefusion/uis/layouts'))), dest = 'ui_layouts', default = [ 'default' ], nargs = '+')
run(program)
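
The two-stage parser above lets each frame processor module register its own arguments before `--help` is wired up. A minimal, self-contained sketch of the pattern (argument names are illustrative):

```
from argparse import ArgumentParser

# static arguments live on a help-less base parser
base = ArgumentParser(add_help = False)
base.add_argument('--skip-download', action = 'store_true')

# the final parser inherits the base and owns --help;
# dynamically loaded modules then register against it
program = ArgumentParser(parents = [ base ], add_help = True)
program.add_argument('--face-enhancer-blend', type = int, default = 100)

print(program.parse_args([ '--face-enhancer-blend', '80' ]))
```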
def apply_args(program : ArgumentParser) -> None:
args = program.parse_args()
# general
facefusion.globals.source_path = args.source_path
facefusion.globals.target_path = args.target_path
facefusion.globals.output_path = normalize_output_path(facefusion.globals.source_path, facefusion.globals.target_path, args.output_path)
facefusion.globals.frame_processors = args.frame_processors
facefusion.globals.ui_layouts = args.ui_layouts
facefusion.globals.keep_fps = args.keep_fps
facefusion.globals.keep_temp = args.keep_temp
facefusion.globals.skip_audio = args.skip_audio
# misc
facefusion.globals.skip_download = args.skip_download
facefusion.globals.headless = args.headless
# execution
facefusion.globals.execution_providers = decode_execution_providers(args.execution_providers)
facefusion.globals.execution_thread_count = args.execution_thread_count
facefusion.globals.execution_queue_count = args.execution_queue_count
facefusion.globals.max_memory = args.max_memory
# face recognition
facefusion.globals.face_recognition = args.face_recognition
facefusion.globals.face_analyser_direction = args.face_analyser_direction
facefusion.globals.face_analyser_age = args.face_analyser_age
facefusion.globals.face_analyser_gender = args.face_analyser_gender
facefusion.globals.reference_face_position = args.reference_face_position
facefusion.globals.reference_frame_number = args.reference_frame_number
facefusion.globals.reference_face_distance = args.reference_face_distance
facefusion.globals.reference_frame_number = args.reference_frame_number
# frame extraction
facefusion.globals.trim_frame_start = args.trim_frame_start
facefusion.globals.trim_frame_end = args.trim_frame_end
facefusion.globals.temp_frame_format = args.temp_frame_format
facefusion.globals.temp_frame_quality = args.temp_frame_quality
facefusion.globals.keep_temp = args.keep_temp
# output creation
facefusion.globals.output_image_quality = args.output_image_quality
facefusion.globals.output_video_encoder = args.output_video_encoder
facefusion.globals.output_video_quality = args.output_video_quality
facefusion.globals.max_memory = args.max_memory
facefusion.globals.execution_providers = decode_execution_providers(args.execution_providers)
facefusion.globals.execution_thread_count = args.execution_thread_count
facefusion.globals.execution_queue_count = args.execution_queue_count
facefusion.globals.skip_download = args.skip_download
facefusion.globals.headless = args.headless
facefusion.globals.keep_fps = args.keep_fps
facefusion.globals.skip_audio = args.skip_audio
# frame processors
available_frame_processors = list_module_names('facefusion/processors/frame/modules')
facefusion.globals.frame_processors = args.frame_processors
for frame_processor in available_frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
frame_processor_module.apply_args(program)
# uis
facefusion.globals.ui_layouts = args.ui_layouts
def suggest_execution_providers_choices() -> List[str]:
return encode_execution_providers(onnxruntime.get_available_providers())
def run(program : ArgumentParser) -> None:
apply_args(program)
limit_resources()
if not pre_check():
return
for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
if not frame_processor_module.pre_check():
return
if facefusion.globals.headless:
conditional_process()
else:
import facefusion.uis.core as ui
for ui_layout in ui.get_ui_layouts_modules(facefusion.globals.ui_layouts):
if not ui_layout.pre_check():
return
ui.launch()
def suggest_execution_thread_count_default() -> int:
if 'CUDAExecutionProvider' in onnxruntime.get_available_providers():
return 8
return 1
def destroy() -> None:
if facefusion.globals.target_path:
clear_temp(facefusion.globals.target_path)
sys.exit()
def limit_resources() -> None:
# prevent tensorflow memory leak
gpus = tensorflow.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
tensorflow.config.experimental.set_virtual_device_configuration(gpu, [
tensorflow.config.experimental.set_virtual_device_configuration(gpu,
[
tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit = 512)
])
# limit memory usage
@@ -122,10 +169,6 @@ def limit_resources() -> None:
resource.setrlimit(resource.RLIMIT_DATA, (memory, memory))
def update_status(message : str, scope : str = 'FACEFUSION.CORE') -> None:
print('[' + scope + '] ' + message)
def pre_check() -> bool:
if sys.version_info < (3, 9):
update_status(wording.get('python_not_supported').format(version = '3.9'))
@@ -136,6 +179,16 @@ def pre_check() -> bool:
return True
def conditional_process() -> None:
for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
if not frame_processor_module.pre_process('output'):
return
if is_image(facefusion.globals.target_path):
process_image()
if is_video(facefusion.globals.target_path):
process_video()
def process_image() -> None:
if predict_image(facefusion.globals.target_path):
return
@@ -160,6 +213,7 @@ def process_video() -> None:
if predict_video(facefusion.globals.target_path):
return
fps = detect_fps(facefusion.globals.target_path) if facefusion.globals.keep_fps else 25.0
# create temp
update_status(wording.get('creating_temp'))
create_temp(facefusion.globals.target_path)
# extract frames
@@ -199,39 +253,5 @@ def process_video() -> None:
update_status(wording.get('processing_video_failed'))
def conditional_process() -> None:
for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
if not frame_processor_module.pre_process('output'):
return
if is_image(facefusion.globals.target_path):
process_image()
if is_video(facefusion.globals.target_path):
process_video()
def run() -> None:
parse_args()
limit_resources()
# pre check
if not pre_check():
return
for frame_processor in get_frame_processors_modules(facefusion.globals.frame_processors):
if not frame_processor.pre_check():
return
# headless or ui
if facefusion.globals.headless:
conditional_process()
else:
import facefusion.uis.core as ui
# pre check
for ui_layout in ui.get_ui_layouts_modules(facefusion.globals.ui_layouts):
if not ui_layout.pre_check():
return
ui.launch()
def destroy() -> None:
if facefusion.globals.target_path:
clear_temp(facefusion.globals.target_path)
sys.exit()
def update_status(message : str, scope : str = 'FACEFUSION.CORE') -> None:
print('[' + scope + '] ' + message)
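
The `--execution-providers` flag accepts short names like `cpu` while onnxruntime reports `CPUExecutionProvider`; the encode/decode helpers imported from `facefusion.utilities` presumably translate between the two, roughly along these lines (a sketch, not the module's actual body):

```
from typing import List
import onnxruntime

def encode_execution_providers(execution_providers : List[str]) -> List[str]:
	# 'CUDAExecutionProvider' -> 'cuda'
	return [ execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers ]

def decode_execution_providers(execution_providers : List[str]) -> List[str]:
	# 'cuda' -> 'CUDAExecutionProvider', keeping only available providers
	available_providers = onnxruntime.get_available_providers()
	return [ provider for provider in available_providers if encode_execution_providers([ provider ])[0] in execution_providers ]

print(encode_execution_providers(onnxruntime.get_available_providers()))
```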

View File

@@ -2,31 +2,39 @@ from typing import List, Optional
from facefusion.typing import FaceRecognition, FaceAnalyserDirection, FaceAnalyserAge, FaceAnalyserGender, TempFrameFormat, OutputVideoEncoder
# general
source_path : Optional[str] = None
target_path : Optional[str] = None
output_path : Optional[str] = None
frame_processors : List[str] = []
ui_layouts : List[str] = []
keep_fps : Optional[bool] = None
keep_temp : Optional[bool] = None
skip_audio : Optional[bool] = None
# misc
skip_download : Optional[bool] = None
headless : Optional[bool] = None
# execution
execution_providers : List[str] = []
execution_thread_count : Optional[int] = None
execution_queue_count : Optional[int] = None
max_memory : Optional[int] = None
# face recognition
face_recognition : Optional[FaceRecognition] = None
face_analyser_direction : Optional[FaceAnalyserDirection] = None
face_analyser_age : Optional[FaceAnalyserAge] = None
face_analyser_gender : Optional[FaceAnalyserGender] = None
reference_face_position : Optional[int] = None
reference_frame_number : Optional[int] = None
reference_face_distance : Optional[float] = None
reference_frame_number : Optional[int] = None
# frame extraction
trim_frame_start : Optional[int] = None
trim_frame_end : Optional[int] = None
temp_frame_format : Optional[TempFrameFormat] = None
temp_frame_quality : Optional[int] = None
keep_temp : Optional[bool] = None
# output creation
output_image_quality : Optional[int] = None
output_video_encoder : Optional[OutputVideoEncoder] = None
output_video_quality : Optional[int] = None
max_memory : Optional[int] = None
execution_providers : List[str] = []
execution_thread_count : Optional[int] = None
execution_queue_count : Optional[int] = None
skip_download : Optional[bool] = None
headless : Optional[bool] = None
keep_fps : Optional[bool] = None
skip_audio : Optional[bool] = None
# frame processors
frame_processors : List[str] = []
# uis
ui_layouts : List[str] = []

View File

@@ -1,9 +1,6 @@
from typing import Dict, Tuple
import argparse
import os
import sys
import subprocess
import tempfile
from argparse import ArgumentParser, HelpFormatter
subprocess.call([ 'pip', 'install', 'inquirer', '-q' ])
@@ -11,55 +8,61 @@ import inquirer
from facefusion import metadata, wording
TORCH : Dict[str, str] =\
{
'cpu': 'https://download.pytorch.org/whl/cpu',
'cuda': 'https://download.pytorch.org/whl/cu118',
'rocm': 'https://download.pytorch.org/whl/rocm5.6'
}
ONNXRUNTIMES : Dict[str, Tuple[str, str]] =\
{
'cpu': ('onnxruntime', '1.16.0'),
'cuda': ('onnxruntime-gpu', '1.16.0'),
'coreml-legacy': ('onnxruntime-coreml', '1.13.1'),
'coreml-silicon': ('onnxruntime-silicon', '1.14.2'),
'coreml-silicon': ('onnxruntime-silicon', '1.16.0'),
'directml': ('onnxruntime-directml', '1.16.0'),
'openvino': ('onnxruntime-openvino', '1.15.0')
}
def run() -> None:
program = argparse.ArgumentParser(formatter_class = lambda prog: argparse.HelpFormatter(prog, max_help_position = 120))
program.add_argument('--onnxruntime', help = wording.get('onnxruntime_help'), dest = 'onnxruntime', choices = ONNXRUNTIMES.keys())
def cli() -> None:
program = ArgumentParser(formatter_class = lambda prog: HelpFormatter(prog, max_help_position = 120))
program.add_argument('--torch', help = wording.get('install_dependency_help').format(dependency = 'torch'), dest = 'torch', choices = TORCH.keys())
program.add_argument('--onnxruntime', help = wording.get('install_dependency_help').format(dependency = 'onnxruntime'), dest = 'onnxruntime', choices = ONNXRUNTIMES.keys())
program.add_argument('-v', '--version', version = metadata.get('name') + ' ' + metadata.get('version'), action = 'version')
run(program)
def run(program : ArgumentParser) -> None:
args = program.parse_args()
if args.onnxruntime:
answers =\
{
'torch': args.torch,
'onnxruntime': args.onnxruntime
}
else:
answers = inquirer.prompt(
[
inquirer.List(
'torch',
message = wording.get('install_dependency_help').format(dependency = 'torch'),
choices = list(TORCH.keys())
),
inquirer.List(
'onnxruntime',
message = wording.get('onnxruntime_help'),
message = wording.get('install_dependency_help').format(dependency = 'onnxruntime'),
choices = list(ONNXRUNTIMES.keys())
)
])
if answers is not None:
torch = answers['torch']
torch_url = TORCH[torch]
onnxruntime = answers['onnxruntime']
onnxruntime_name, onnxruntime_version = ONNXRUNTIMES[onnxruntime]
python_id = 'cp' + str(sys.version_info.major) + str(sys.version_info.minor)
subprocess.call([ 'pip', 'uninstall', 'torch', '-y' ])
if onnxruntime == 'cuda':
subprocess.call([ 'pip', 'install', '-r', 'requirements.txt', '--extra-index-url', 'https://download.pytorch.org/whl/cu118' ])
else:
subprocess.call([ 'pip', 'install', '-r', 'requirements.txt' ])
subprocess.call([ 'pip', 'install', '-r', 'requirements.txt', '--extra-index-url', torch_url ])
if onnxruntime != 'cpu':
subprocess.call([ 'pip', 'uninstall', 'onnxruntime', onnxruntime_name, '-y' ])
if onnxruntime != 'coreml-silicon':
subprocess.call([ 'pip', 'install', onnxruntime_name + '==' + onnxruntime_version ])
elif python_id in [ 'cp39', 'cp310', 'cp311' ]:
wheel_name = '-'.join([ 'onnxruntime_silicon', onnxruntime_version, python_id, python_id, 'macosx_12_0_arm64.whl' ])
wheel_path = os.path.join(tempfile.gettempdir(), wheel_name)
wheel_url = 'https://github.com/cansik/onnxruntime-silicon/releases/download/v' + onnxruntime_version + '/' + wheel_name
subprocess.call([ 'curl', '--silent', '--location', '--continue-at', '-', '--output', wheel_path, wheel_url ])
subprocess.call([ 'pip', 'install', wheel_path ])
os.remove(wheel_path)
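
For reference, with Python 3.10 the wheel name assembled above resolves as follows (version taken from the ONNXRUNTIMES table):

```
onnxruntime_version = '1.16.0'
python_id = 'cp310'
wheel_name = '-'.join([ 'onnxruntime_silicon', onnxruntime_version, python_id, python_id, 'macosx_12_0_arm64.whl' ])
print(wheel_name)  # onnxruntime_silicon-1.16.0-cp310-cp310-macosx_12_0_arm64.whl
```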

View File

@@ -2,7 +2,7 @@ METADATA =\
{
'name': 'FaceFusion',
'description': 'Next generation face swapper and enhancer',
'version': '1.2.1',
'version': '1.3.0',
'license': 'MIT',
'author': 'Henry Ruhs',
'url': 'https://facefusion.io'

View File

@@ -1,5 +1,6 @@
import threading
from functools import lru_cache
import numpy
import opennsfw2
from PIL import Image
@@ -10,6 +11,8 @@ from facefusion.typing import Frame
PREDICTOR = None
THREAD_LOCK : threading.Lock = threading.Lock()
MAX_PROBABILITY = 0.75
FRAME_INTERVAL = 25
STREAM_COUNTER = 0
def get_predictor() -> Model:
@@ -27,8 +30,17 @@ def clear_predictor() -> None:
PREDICTOR = None
def predict_frame(target_frame : Frame) -> bool:
image = Image.fromarray(target_frame)
def predict_stream(frame : Frame) -> bool:
global STREAM_COUNTER
STREAM_COUNTER = STREAM_COUNTER + 1
if STREAM_COUNTER % FRAME_INTERVAL == 0:
return predict_frame(frame)
return False
def predict_frame(frame : Frame) -> bool:
image = Image.fromarray(frame)
image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO)
views = numpy.expand_dims(image, axis = 0)
_, probability = get_predictor().predict(views)[0]
@@ -42,5 +54,5 @@ def predict_image(image_path : str) -> bool:
@lru_cache(maxsize = None)
def predict_video(video_path : str) -> bool:
_, probabilities = opennsfw2.predict_video_frames(video_path = video_path, frame_interval = 25)
_, probabilities = opennsfw2.predict_video_frames(video_path = video_path, frame_interval = FRAME_INTERVAL)
return any(probability > MAX_PROBABILITY for probability in probabilities)
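
The sampling cadence means only every 25th webcam frame pays for a model inference; a quick illustration of which frame numbers get checked:

```
FRAME_INTERVAL = 25
checked = [ counter for counter in range(1, 101) if counter % FRAME_INTERVAL == 0 ]
print(checked)  # [25, 50, 75, 100] -> 4 of 100 streamed frames hit the predictor
```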

View File

@@ -0,0 +1,5 @@
from typing import List
face_swapper_models : List[str] = [ 'inswapper_128', 'inswapper_128_fp16' ]
face_enhancer_models : List[str] = [ 'codeformer', 'gfpgan_1.2', 'gfpgan_1.3', 'gfpgan_1.4', 'gpen_bfr_512' ]
frame_enhancer_models : List[str] = [ 'realesrgan_x2plus', 'realesrgan_x4plus', 'realesrnet_x4plus' ]

View File

@@ -5,17 +5,22 @@ import psutil
from concurrent.futures import ThreadPoolExecutor, as_completed
from queue import Queue
from types import ModuleType
from typing import Any, List, Callable
from typing import Any, List
from tqdm import tqdm
import facefusion.globals
from facefusion import wording
from facefusion.typing import Process_Frames
FRAME_PROCESSORS_MODULES : List[ModuleType] = []
FRAME_PROCESSORS_METHODS =\
[
'get_frame_processor',
'clear_frame_processor',
'get_options',
'set_options',
'register_args',
'apply_args',
'pre_check',
'pre_process',
'process_frame',
@@ -57,7 +62,7 @@ def clear_frame_processors_modules() -> None:
FRAME_PROCESSORS_MODULES = []
def multi_process_frames(source_path : str, temp_frame_paths : List[str], process_frames : Callable[[str, List[str], Callable[[], None]], None]) -> None:
def multi_process_frames(source_path : str, temp_frame_paths : List[str], process_frames : Process_Frames) -> None:
progress_bar_format = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'
with tqdm(total = len(temp_frame_paths), desc = wording.get('processing'), unit = 'frame', dynamic_ncols = True, bar_format = progress_bar_format) as progress:
with ThreadPoolExecutor(max_workers = facefusion.globals.execution_thread_count) as executor:
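
`load_frame_processor_module` presumably imports a module by name and validates it against `FRAME_PROCESSORS_METHODS` above; a sketch under that assumption:

```
import importlib
from types import ModuleType

def load_frame_processor_module(frame_processor : str) -> ModuleType:
	frame_processor_module = importlib.import_module('facefusion.processors.frame.modules.' + frame_processor)
	# every frame processor must expose the full method protocol
	for method_name in FRAME_PROCESSORS_METHODS:
		if not hasattr(frame_processor_module, method_name):
			raise NotImplementedError(method_name)
	return frame_processor_module
```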

View File

@@ -0,0 +1,7 @@
from typing import Optional
face_swapper_model : Optional[str] = None
face_enhancer_model : Optional[str] = None
face_enhancer_blend : Optional[int] = None
frame_enhancer_model : Optional[str] = None
frame_enhancer_blend : Optional[int] = None

View File

@@ -1,21 +1,53 @@
from typing import Any, List, Callable
from typing import Any, List, Tuple, Dict, Literal, Optional
from argparse import ArgumentParser
import cv2
import threading
from gfpgan.utils import GFPGANer
import numpy
import onnxruntime
import facefusion.globals
from facefusion import wording, utilities
from facefusion import wording
from facefusion.core import update_status
from facefusion.face_analyser import get_many_faces, clear_face_analyser
from facefusion.typing import Frame, Face, ProcessMode
from facefusion.typing import Face, Frame, Matrix, Update_Process, ProcessMode, ModelValue, OptionsWithModel
from facefusion.utilities import conditional_download, resolve_relative_path, is_image, is_video, is_file, is_download_done
from facefusion.vision import read_image, read_static_image, write_image
from facefusion.processors.frame import globals as frame_processors_globals
from facefusion.processors.frame import choices as frame_processors_choices
FRAME_PROCESSOR = None
THREAD_SEMAPHORE : threading.Semaphore = threading.Semaphore()
THREAD_LOCK : threading.Lock = threading.Lock()
NAME = 'FACEFUSION.FRAME_PROCESSOR.FACE_ENHANCER'
MODEL_URL = 'https://github.com/facefusion/facefusion-assets/releases/download/models/GFPGANv1.4.pth'
MODEL_PATH = resolve_relative_path('../.assets/models/GFPGANv1.4.pth')
MODELS : Dict[str, ModelValue] =\
{
'codeformer':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/codeformer.onnx',
'path': resolve_relative_path('../.assets/models/codeformer.onnx')
},
'gfpgan_1.2':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/GFPGANv1.2.onnx',
'path': resolve_relative_path('../.assets/models/GFPGANv1.2.onnx')
},
'gfpgan_1.3':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/GFPGANv1.3.onnx',
'path': resolve_relative_path('../.assets/models/GFPGANv1.3.onnx')
},
'gfpgan_1.4':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/GFPGANv1.4.onnx',
'path': resolve_relative_path('../.assets/models/GFPGANv1.4.onnx')
},
'gpen_bfr_512':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/GPEN-BFR-512.onnx',
'path': resolve_relative_path('../.assets/models/GPEN-BFR-512.onnx')
}
}
OPTIONS : Optional[OptionsWithModel] = None
def get_frame_processor() -> Any:
@@ -23,11 +55,8 @@ def get_frame_processor() -> Any:
with THREAD_LOCK:
if FRAME_PROCESSOR is None:
FRAME_PROCESSOR = GFPGANer(
model_path = MODEL_PATH,
upscale = 1,
device = utilities.get_device(facefusion.globals.execution_providers)
)
model_path = get_options('model').get('path')
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = facefusion.globals.execution_providers)
return FRAME_PROCESSOR
@@ -37,18 +66,49 @@ def clear_frame_processor() -> None:
FRAME_PROCESSOR = None
def get_options(key : Literal[ 'model' ]) -> Any:
global OPTIONS
if OPTIONS is None:
OPTIONS =\
{
'model': MODELS[frame_processors_globals.face_enhancer_model]
}
return OPTIONS.get(key)
def set_options(key : Literal[ 'model' ], value : Any) -> None:
global OPTIONS
OPTIONS[key] = value
def register_args(program : ArgumentParser) -> None:
program.add_argument('--face-enhancer-model', help = wording.get('frame_processor_model_help'), dest = 'face_enhancer_model', default = 'gfpgan_1.4', choices = frame_processors_choices.face_enhancer_models)
program.add_argument('--face-enhancer-blend', help = wording.get('frame_processor_blend_help'), dest = 'face_enhancer_blend', type = int, default = 100, choices = range(101), metavar = '[0-100]')
def apply_args(program : ArgumentParser) -> None:
args = program.parse_args()
frame_processors_globals.face_enhancer_model = args.face_enhancer_model
frame_processors_globals.face_enhancer_blend = args.face_enhancer_blend
def pre_check() -> bool:
if not facefusion.globals.skip_download:
download_directory_path = resolve_relative_path('../.assets/models')
conditional_download(download_directory_path, [ MODEL_URL ])
model_url = get_options('model').get('url')
conditional_download(download_directory_path, [ model_url ])
return True
def pre_process(mode : ProcessMode) -> bool:
if not facefusion.globals.skip_download and not is_download_done(MODEL_URL, MODEL_PATH):
model_url = get_options('model').get('url')
model_path = get_options('model').get('path')
if not facefusion.globals.skip_download and not is_download_done(model_url, model_path):
update_status(wording.get('model_download_not_done') + wording.get('exclamation_mark'), NAME)
return False
elif not is_file(MODEL_PATH):
elif not is_file(model_path):
update_status(wording.get('model_file_not_present') + wording.get('exclamation_mark'), NAME)
return False
if mode in [ 'output', 'preview' ] and not is_image(facefusion.globals.target_path) and not is_video(facefusion.globals.target_path):
@@ -67,21 +127,76 @@ def post_process() -> None:
def enhance_face(target_face: Face, temp_frame: Frame) -> Frame:
start_x, start_y, end_x, end_y = map(int, target_face['bbox'])
padding_x = int((end_x - start_x) * 0.5)
padding_y = int((end_y - start_y) * 0.5)
start_x = max(0, start_x - padding_x)
start_y = max(0, start_y - padding_y)
end_x = max(0, end_x + padding_x)
end_y = max(0, end_y + padding_y)
crop_frame = temp_frame[start_y:end_y, start_x:end_x]
if crop_frame.size:
frame_processor = get_frame_processor()
crop_frame, affine_matrix = warp_face(target_face, temp_frame)
crop_frame = prepare_crop_frame(crop_frame)
frame_processor_inputs = {}
for frame_processor_input in frame_processor.get_inputs():
if frame_processor_input.name == 'input':
frame_processor_inputs[frame_processor_input.name] = crop_frame
if frame_processor_input.name == 'weight':
frame_processor_inputs[frame_processor_input.name] = numpy.array([ 1 ], dtype = numpy.double)
with THREAD_SEMAPHORE:
_, _, crop_frame = get_frame_processor().enhance(
crop_frame,
paste_back = True
)
temp_frame[start_y:end_y, start_x:end_x] = crop_frame
crop_frame = frame_processor.run(None, frame_processor_inputs)[0][0]
crop_frame = normalize_crop_frame(crop_frame)
paste_frame = paste_back(temp_frame, crop_frame, affine_matrix)
temp_frame = blend_frame(temp_frame, paste_frame)
return temp_frame
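
A stripped-down sketch of the ONNX inference wired up in `enhance_face`, assuming a downloaded `gfpgan_1.4.onnx` in the working directory (path and provider are placeholders):

```
import numpy
import onnxruntime

session = onnxruntime.InferenceSession('gfpgan_1.4.onnx', providers = [ 'CPUExecutionProvider' ])
# a normalized 1x3x512x512 crop, as produced by prepare_crop_frame()
inputs = { 'input': numpy.zeros((1, 3, 512, 512), dtype = numpy.float32) }
# some of the models expose an extra 'weight' input
if any(session_input.name == 'weight' for session_input in session.get_inputs()):
	inputs['weight'] = numpy.array([ 1 ], dtype = numpy.double)
output = session.run(None, inputs)[0][0]  # 3x512x512, roughly in [-1, 1]
```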
def warp_face(target_face : Face, temp_frame : Frame) -> Tuple[Frame, Matrix]:
template = numpy.array(
[
[ 192.98138, 239.94708 ],
[ 318.90277, 240.1936 ],
[ 256.63416, 314.01935 ],
[ 201.26117, 371.41043 ],
[ 313.08905, 371.15118 ]
])
affine_matrix = cv2.estimateAffinePartial2D(target_face['kps'], template, method = cv2.LMEDS)[0]
crop_frame = cv2.warpAffine(temp_frame, affine_matrix, (512, 512))
return crop_frame, affine_matrix
def prepare_crop_frame(crop_frame : Frame) -> Frame:
crop_frame = crop_frame[:, :, ::-1] / 255.0
crop_frame = (crop_frame - 0.5) / 0.5
crop_frame = numpy.expand_dims(crop_frame.transpose(2, 0, 1), axis = 0).astype(numpy.float32)
return crop_frame
def normalize_crop_frame(crop_frame : Frame) -> Frame:
crop_frame = numpy.clip(crop_frame, -1, 1)
crop_frame = (crop_frame + 1) / 2
crop_frame = crop_frame.transpose(1, 2, 0)
crop_frame = (crop_frame * 255.0).round()
crop_frame = crop_frame.astype(numpy.uint8)[:, :, ::-1]
return crop_frame
def paste_back(temp_frame : Frame, crop_frame : Frame, affine_matrix : Matrix) -> Frame:
inverse_affine_matrix = cv2.invertAffineTransform(affine_matrix)
temp_frame_height, temp_frame_width = temp_frame.shape[0:2]
crop_frame_height, crop_frame_width = crop_frame.shape[0:2]
inverse_crop_frame = cv2.warpAffine(crop_frame, inverse_affine_matrix, (temp_frame_width, temp_frame_height))
inverse_mask = numpy.ones((crop_frame_height, crop_frame_width, 3), dtype = numpy.float32)
inverse_mask_frame = cv2.warpAffine(inverse_mask, inverse_affine_matrix, (temp_frame_width, temp_frame_height))
inverse_mask_frame = cv2.erode(inverse_mask_frame, numpy.ones((2, 2)))
inverse_mask_border = inverse_mask_frame * inverse_crop_frame
inverse_mask_area = numpy.sum(inverse_mask_frame) // 3
inverse_mask_edge = int(inverse_mask_area ** 0.5) // 20
inverse_mask_radius = inverse_mask_edge * 2
inverse_mask_center = cv2.erode(inverse_mask_frame, numpy.ones((inverse_mask_radius, inverse_mask_radius)))
inverse_mask_blur_size = inverse_mask_edge * 2 + 1
inverse_mask_blur_area = cv2.GaussianBlur(inverse_mask_center, (inverse_mask_blur_size, inverse_mask_blur_size), 0)
temp_frame = inverse_mask_blur_area * inverse_mask_border + (1 - inverse_mask_blur_area) * temp_frame
temp_frame = temp_frame.clip(0, 255).astype(numpy.uint8)
return temp_frame
def blend_frame(temp_frame : Frame, paste_frame : Frame) -> Frame:
face_enhancer_blend = 1 - (frame_processors_globals.face_enhancer_blend / 100)
temp_frame = cv2.addWeighted(temp_frame, face_enhancer_blend, paste_frame, 1 - face_enhancer_blend, 0)
return temp_frame
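
The warp/paste pair above is a standard five-point face alignment round trip; a self-contained sketch with made-up landmarks:

```
import cv2
import numpy

# hypothetical landmarks (eyes, nose, mouth corners) in frame space
kps = numpy.array([ [ 220, 160 ], [ 300, 158 ], [ 260, 210 ], [ 228, 260 ], [ 292, 258 ] ], dtype = numpy.float32)
template = numpy.array([ [ 192.98138, 239.94708 ], [ 318.90277, 240.1936 ], [ 256.63416, 314.01935 ], [ 201.26117, 371.41043 ], [ 313.08905, 371.15118 ] ], dtype = numpy.float32)
frame = numpy.zeros((480, 640, 3), dtype = numpy.uint8)
# similarity transform from detected landmarks onto the 512x512 template
affine_matrix = cv2.estimateAffinePartial2D(kps, template, method = cv2.LMEDS)[0]
crop_frame = cv2.warpAffine(frame, affine_matrix, (512, 512))
# after enhancement the inverse transform maps the crop back into frame space
inverse_matrix = cv2.invertAffineTransform(affine_matrix)
restored = cv2.warpAffine(crop_frame, inverse_matrix, (640, 480))
```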
@@ -93,7 +208,7 @@ def process_frame(source_face : Face, reference_face : Face, temp_frame : Frame)
return temp_frame
def process_frames(source_path : str, temp_frame_paths : List[str], update_progress: Callable[[], None]) -> None:
def process_frames(source_path : str, temp_frame_paths : List[str], update_progress : Update_Process) -> None:
for temp_frame_path in temp_frame_paths:
temp_frame = read_image(temp_frame_path)
result_frame = process_frame(None, None, temp_frame)

View File

@@ -1,4 +1,5 @@
from typing import Any, List, Callable
from typing import Any, List, Dict, Literal, Optional
from argparse import ArgumentParser
import insightface
import threading
@@ -8,15 +9,29 @@ from facefusion import wording
from facefusion.core import update_status
from facefusion.face_analyser import get_one_face, get_many_faces, find_similar_faces, clear_face_analyser
from facefusion.face_reference import get_face_reference, set_face_reference
from facefusion.typing import Face, Frame, ProcessMode
from facefusion.typing import Face, Frame, Update_Process, ProcessMode, ModelValue, OptionsWithModel
from facefusion.utilities import conditional_download, resolve_relative_path, is_image, is_video, is_file, is_download_done
from facefusion.vision import read_image, read_static_image, write_image
from facefusion.processors.frame import globals as frame_processors_globals
from facefusion.processors.frame import choices as frame_processors_choices
FRAME_PROCESSOR = None
THREAD_LOCK : threading.Lock = threading.Lock()
NAME = 'FACEFUSION.FRAME_PROCESSOR.FACE_SWAPPER'
MODEL_URL = 'https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128.onnx'
MODEL_PATH = resolve_relative_path('../.assets/models/inswapper_128.onnx')
MODELS : Dict[str, ModelValue] =\
{
'inswapper_128':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128.onnx',
'path': resolve_relative_path('../.assets/models/inswapper_128.onnx')
},
'inswapper_128_fp16':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128_fp16.onnx',
'path': resolve_relative_path('../.assets/models/inswapper_128_fp16.onnx')
}
}
OPTIONS : Optional[OptionsWithModel] = None
def get_frame_processor() -> Any:
@@ -24,7 +39,8 @@ def get_frame_processor() -> Any:
with THREAD_LOCK:
if FRAME_PROCESSOR is None:
FRAME_PROCESSOR = insightface.model_zoo.get_model(MODEL_PATH, providers = facefusion.globals.execution_providers)
model_path = get_options('model').get('path')
FRAME_PROCESSOR = insightface.model_zoo.get_model(model_path, providers = facefusion.globals.execution_providers)
return FRAME_PROCESSOR
@@ -34,18 +50,47 @@ def clear_frame_processor() -> None:
FRAME_PROCESSOR = None
def get_options(key : Literal[ 'model' ]) -> Any:
global OPTIONS
if OPTIONS is None:
OPTIONS = \
{
'model': MODELS[frame_processors_globals.face_swapper_model]
}
return OPTIONS.get(key)
def set_options(key : Literal[ 'model' ], value : Any) -> None:
global OPTIONS
OPTIONS[key] = value
def register_args(program : ArgumentParser) -> None:
program.add_argument('--face-swapper-model', help = wording.get('frame_processor_model_help'), dest = 'face_swapper_model', default = 'inswapper_128', choices = frame_processors_choices.face_swapper_models)
def apply_args(program : ArgumentParser) -> None:
args = program.parse_args()
frame_processors_globals.face_swapper_model = args.face_swapper_model
def pre_check() -> bool:
if not facefusion.globals.skip_download:
download_directory_path = resolve_relative_path('../.assets/models')
conditional_download(download_directory_path, [ MODEL_URL ])
model_url = get_options('model').get('url')
conditional_download(download_directory_path, [ model_url ])
return True
def pre_process(mode : ProcessMode) -> bool:
if not facefusion.globals.skip_download and not is_download_done(MODEL_URL, MODEL_PATH):
model_url = get_options('model').get('url')
model_path = get_options('model').get('path')
if not facefusion.globals.skip_download and not is_download_done(model_url, model_path):
update_status(wording.get('model_download_not_done') + wording.get('exclamation_mark'), NAME)
return False
elif not is_file(MODEL_PATH):
elif not is_file(model_path):
update_status(wording.get('model_file_not_present') + wording.get('exclamation_mark'), NAME)
return False
if not is_image(facefusion.globals.source_path):
@@ -87,7 +132,7 @@ def process_frame(source_face : Face, reference_face : Face, temp_frame : Frame)
return temp_frame
def process_frames(source_path : str, temp_frame_paths : List[str], update_progress: Callable[[], None]) -> None:
def process_frames(source_path : str, temp_frame_paths : List[str], update_progress : Update_Process) -> None:
source_face = get_one_face(read_static_image(source_path))
reference_face = get_face_reference() if 'reference' in facefusion.globals.face_recognition else None
for temp_frame_path in temp_frame_paths:

View File

@@ -1,23 +1,47 @@
from typing import Any, List, Callable
from typing import Any, List, Dict, Literal, Optional
from argparse import ArgumentParser
import threading
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer
import facefusion.globals
import facefusion.processors.frame.core as frame_processors
from facefusion import wording, utilities
from facefusion import wording
from facefusion.core import update_status
from facefusion.face_analyser import clear_face_analyser
from facefusion.typing import Frame, Face, ProcessMode
from facefusion.utilities import conditional_download, resolve_relative_path, is_file, is_download_done
from facefusion.typing import Frame, Face, Update_Process, ProcessMode, ModelValue, OptionsWithModel
from facefusion.utilities import conditional_download, resolve_relative_path, is_file, is_download_done, get_device
from facefusion.vision import read_image, read_static_image, write_image
from facefusion.processors.frame import globals as frame_processors_globals
from facefusion.processors.frame import choices as frame_processors_choices
FRAME_PROCESSOR = None
THREAD_SEMAPHORE : threading.Semaphore = threading.Semaphore()
THREAD_LOCK : threading.Lock = threading.Lock()
NAME = 'FACEFUSION.FRAME_PROCESSOR.FRAME_ENHANCER'
MODEL_URL = 'https://github.com/facefusion/facefusion-assets/releases/download/models/RealESRGAN_x4plus.pth'
MODEL_PATH = resolve_relative_path('../.assets/models/RealESRGAN_x4plus.pth')
MODELS : Dict[str, ModelValue] =\
{
'realesrgan_x2plus':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/RealESRGAN_x2plus.pth',
'path': resolve_relative_path('../.assets/models/RealESRGAN_x2plus.pth'),
'scale': 2
},
'realesrgan_x4plus':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/RealESRGAN_x4plus.pth',
'path': resolve_relative_path('../.assets/models/RealESRGAN_x4plus.pth'),
'scale': 4
},
'realesrnet_x4plus':
{
'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/RealESRNet_x4plus.pth',
'path': resolve_relative_path('../.assets/models/RealESRNet_x4plus.pth'),
'scale': 4
}
}
OPTIONS : Optional[OptionsWithModel] = None
def get_frame_processor() -> Any:
@@ -25,21 +49,17 @@ def get_frame_processor() -> Any:
with THREAD_LOCK:
if FRAME_PROCESSOR is None:
model_path = get_options('model').get('path')
model_scale = get_options('model').get('scale')
FRAME_PROCESSOR = RealESRGANer(
model_path = MODEL_PATH,
model_path = model_path,
model = RRDBNet(
num_in_ch = 3,
num_out_ch = 3,
num_feat = 64,
num_block = 23,
num_grow_ch = 32,
scale = 4
scale = model_scale
),
device = utilities.get_device(facefusion.globals.execution_providers),
tile = 512,
tile_pad = 32,
pre_pad = 0,
scale = 4
device = get_device(facefusion.globals.execution_providers),
scale = model_scale
)
return FRAME_PROCESSOR
@@ -50,18 +70,49 @@ def clear_frame_processor() -> None:
FRAME_PROCESSOR = None
def get_options(key : Literal[ 'model' ]) -> Any:
global OPTIONS
if OPTIONS is None:
OPTIONS = \
{
'model': MODELS[frame_processors_globals.frame_enhancer_model]
}
return OPTIONS.get(key)
def set_options(key : Literal[ 'model' ], value : Any) -> None:
global OPTIONS
OPTIONS[key] = value
def register_args(program : ArgumentParser) -> None:
program.add_argument('--frame-enhancer-model', help = wording.get('frame_processor_model_help'), dest = 'frame_enhancer_model', default = 'realesrgan_x2plus', choices = frame_processors_choices.frame_enhancer_models)
program.add_argument('--frame-enhancer-blend', help = wording.get('frame_processor_blend_help'), dest = 'frame_enhancer_blend', type = int, default = 100, choices = range(101), metavar = '[0-100]')
def apply_args(program : ArgumentParser) -> None:
args = program.parse_args()
frame_processors_globals.frame_enhancer_model = args.frame_enhancer_model
frame_processors_globals.frame_enhancer_blend = args.frame_enhancer_blend
def pre_check() -> bool:
if not facefusion.globals.skip_download:
download_directory_path = resolve_relative_path('../.assets/models')
conditional_download(download_directory_path, [ MODEL_URL ])
model_url = get_options('model').get('url')
conditional_download(download_directory_path, [ model_url ])
return True
def pre_process(mode : ProcessMode) -> bool:
if not facefusion.globals.skip_download and not is_download_done(MODEL_URL, MODEL_PATH):
model_url = get_options('model').get('url')
model_path = get_options('model').get('path')
if not facefusion.globals.skip_download and not is_download_done(model_url, model_path):
update_status(wording.get('model_download_not_done') + wording.get('exclamation_mark'), NAME)
return False
elif not is_file(MODEL_PATH):
elif not is_file(model_path):
update_status(wording.get('model_file_not_present') + wording.get('exclamation_mark'), NAME)
return False
if mode == 'output' and not facefusion.globals.output_path:
@@ -78,7 +129,15 @@ def post_process() -> None:
def enhance_frame(temp_frame : Frame) -> Frame:
with THREAD_SEMAPHORE:
temp_frame, _ = get_frame_processor().enhance(temp_frame, outscale = 1)
paste_frame, _ = get_frame_processor().enhance(temp_frame)
temp_frame = blend_frame(temp_frame, paste_frame)
return temp_frame
def blend_frame(temp_frame : Frame, paste_frame : Frame) -> Frame:
frame_enhancer_blend = 1 - (frame_processors_globals.frame_enhancer_blend / 100)
temp_frame = cv2.resize(temp_frame, (paste_frame.shape[1], paste_frame.shape[0]))
temp_frame = cv2.addWeighted(temp_frame, frame_enhancer_blend, paste_frame, 1 - frame_enhancer_blend, 0)
return temp_frame
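
Note the blend factor is inverted relative to the slider: at 100 the original frame gets weight 0 and only the enhanced frame remains. A quick check:

```
import cv2
import numpy

temp_frame = numpy.full((2, 2, 3), 10, dtype = numpy.uint8)
paste_frame = numpy.full((2, 2, 3), 200, dtype = numpy.uint8)
frame_enhancer_blend = 1 - (100 / 100)  # slider at 100 -> weight 0 on the original
print(cv2.addWeighted(temp_frame, frame_enhancer_blend, paste_frame, 1 - frame_enhancer_blend, 0)[0, 0])  # [200 200 200]
```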
@@ -86,7 +145,7 @@ def process_frame(source_face : Face, reference_face : Face, temp_frame : Frame)
return enhance_frame(temp_frame)
def process_frames(source_path : str, temp_frame_paths : List[str], update_progress: Callable[[], None]) -> None:
def process_frames(source_path : str, temp_frame_paths : List[str], update_progress : Update_Process) -> None:
for temp_frame_path in temp_frame_paths:
temp_frame = read_image(temp_frame_path)
result_frame = process_frame(None, None, temp_frame)

View File

@@ -1,9 +1,13 @@
from typing import Any, Literal
from typing import Any, Literal, Callable, List, TypedDict, Dict
from insightface.app.common import Face
import numpy
Face = Face
Frame = numpy.ndarray[Any, Any]
Matrix = numpy.ndarray[Any, Any]
Update_Process = Callable[[], None]
Process_Frames = Callable[[str, List[str], Update_Process], None]
ProcessMode = Literal[ 'output', 'preview', 'stream' ]
FaceRecognition = Literal[ 'reference', 'many' ]
@@ -12,3 +16,9 @@ FaceAnalyserAge = Literal[ 'child', 'teen', 'adult', 'senior' ]
FaceAnalyserGender = Literal[ 'male', 'female' ]
TempFrameFormat = Literal[ 'jpg', 'png' ]
OutputVideoEncoder = Literal[ 'libx264', 'libx265', 'libvpx-vp9', 'h264_nvenc', 'hevc_nvenc' ]
ModelValue = Dict[str, Any]
OptionsWithModel = TypedDict('OptionsWithModel',
{
'model' : ModelValue
})
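
The functional TypedDict form used here type-checks plain dict literals; for example (URL and path are placeholders):

```
from typing import Any, Dict, TypedDict

ModelValue = Dict[str, Any]
OptionsWithModel = TypedDict('OptionsWithModel',
{
	'model' : ModelValue
})

options : OptionsWithModel = { 'model': { 'url': 'https://example.com/model.onnx', 'path': '/tmp/model.onnx' } }
print(options['model']['path'])
```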

View File

@@ -0,0 +1,7 @@
:root:root:root button:not([class])
{
border-radius: 0.375rem;
float: left;
overflow: hidden;
width: 100%;
}

View File

@@ -0,0 +1,32 @@
:root:root:root input[type="number"]
{
max-width: 6rem;
}
:root:root:root [type="checkbox"],
:root:root:root [type="radio"]
{
border-radius: 50%;
height: 1.125rem;
width: 1.125rem;
}
:root:root:root input[type="range"]
{
height: 0.5rem;
}
:root:root:root input[type="range"]::-moz-range-thumb,
:root:root:root input[type="range"]::-webkit-slider-thumb
{
background: var(--neutral-300);
border: unset;
border-radius: 50%;
height: 1.125rem;
width: 1.125rem;
}
:root:root:root input[type="range"]::-webkit-slider-thumb
{
margin-top: 0.375rem;
}

View File

@@ -2,6 +2,6 @@ from typing import List
from facefusion.uis.typing import WebcamMode
settings : List[str] = [ 'keep-fps', 'keep-temp', 'skip-audio', 'skip-download' ]
webcam_mode : List[WebcamMode] = [ 'inline', 'stream_udp', 'stream_v4l2' ]
webcam_resolution : List[str] = [ '320x240', '640x480', '1280x720', '1920x1080', '2560x1440', '3840x2160' ]
common_options : List[str] = [ 'keep-fps', 'keep-temp', 'skip-audio', 'skip-download' ]
webcam_modes : List[WebcamMode] = [ 'inline', 'udp', 'v4l2' ]
webcam_resolutions : List[str] = [ '320x240', '640x480', '1280x720', '1920x1080', '2560x1440', '3840x2160' ]

View File

@@ -1,12 +1,23 @@
from typing import Optional
import gradio
from facefusion import metadata
from facefusion import metadata, wording
ABOUT_HTML : Optional[gradio.HTML] = None
ABOUT_BUTTON : Optional[gradio.HTML] = None
DONATE_BUTTON : Optional[gradio.HTML] = None
def render() -> None:
global ABOUT_HTML
global ABOUT_BUTTON
global DONATE_BUTTON
ABOUT_HTML = gradio.HTML('<center><a href="' + metadata.get('url') + '">' + metadata.get('name') + ' ' + metadata.get('version') + '</a></center>')
ABOUT_BUTTON = gradio.Button(
value = metadata.get('name') + ' ' + metadata.get('version'),
variant = 'primary',
link = metadata.get('url')
)
DONATE_BUTTON = gradio.Button(
value = wording.get('donate_button_label'),
link = 'https://donate.facefusion.io',
size = 'sm'
)

View File

@@ -11,9 +11,8 @@ from facefusion.face_cache import clear_faces_cache
from facefusion.processors.frame.core import get_frame_processors_modules
from facefusion.vision import count_video_frame_total
from facefusion.core import limit_resources, conditional_process
from facefusion.uis.typing import Update
from facefusion.uis import core as ui
from facefusion.utilities import normalize_output_path, clear_temp
from facefusion.uis.core import get_ui_component
BENCHMARK_RESULTS_DATAFRAME : Optional[gradio.Dataframe] = None
BENCHMARK_START_BUTTON : Optional[gradio.Button] = None
@@ -58,16 +57,18 @@ def render() -> None:
)
BENCHMARK_START_BUTTON = gradio.Button(
value = wording.get('start_button_label'),
variant = 'primary'
variant = 'primary',
size = 'sm'
)
BENCHMARK_CLEAR_BUTTON = gradio.Button(
value = wording.get('clear_button_label')
value = wording.get('clear_button_label'),
size = 'sm'
)
def listen() -> None:
benchmark_runs_checkbox_group = ui.get_component('benchmark_runs_checkbox_group')
benchmark_cycles_slider = ui.get_component('benchmark_cycles_slider')
benchmark_runs_checkbox_group = get_ui_component('benchmark_runs_checkbox_group')
benchmark_cycles_slider = get_ui_component('benchmark_cycles_slider')
if benchmark_runs_checkbox_group and benchmark_cycles_slider:
BENCHMARK_START_BUTTON.click(start, inputs = [ benchmark_runs_checkbox_group, benchmark_cycles_slider ], outputs = BENCHMARK_RESULTS_DATAFRAME)
BENCHMARK_CLEAR_BUTTON.click(clear, outputs = BENCHMARK_RESULTS_DATAFRAME)
@@ -124,7 +125,7 @@ def benchmark(target_path : str, benchmark_cycles : int) -> List[Any]:
]
def clear() -> Update:
def clear() -> gradio.Dataframe:
if facefusion.globals.target_path:
clear_temp(facefusion.globals.target_path)
return gradio.update(value = None)
return gradio.Dataframe(value = None)
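
The swap from gradio.update(...) to returning a fresh gradio.Dataframe(...) recurs throughout this commit: recent Gradio releases accept a component instance returned from a callback as a partial update of the rendered component. A minimal sketch of the pattern, assuming that newer update semantic:

```
import gradio

def clear_results() -> gradio.Dataframe:
    # returning a component instance patches only the fields set here
    return gradio.Dataframe(value = None)

with gradio.Blocks() as demo:
    results_dataframe = gradio.Dataframe()
    clear_button = gradio.Button('Clear')
    clear_button.click(clear_results, outputs = results_dataframe)
```
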

View File

@@ -1,9 +1,8 @@
from typing import Optional, List
from typing import Optional
import gradio
from facefusion import wording
from facefusion.uis.typing import Update
from facefusion.uis import core as ui
from facefusion.uis.core import register_ui_component
from facefusion.uis.components.benchmark import BENCHMARKS
BENCHMARK_RUNS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
@@ -21,18 +20,10 @@ def render() -> None:
)
BENCHMARK_CYCLES_SLIDER = gradio.Slider(
label = wording.get('benchmark_cycles_slider_label'),
minimum = 1,
step = 1,
value = 3,
step = 1,
minimum = 1,
maximum = 10
)
ui.register_component('benchmark_runs_checkbox_group', BENCHMARK_RUNS_CHECKBOX_GROUP)
ui.register_component('benchmark_cycles_slider', BENCHMARK_CYCLES_SLIDER)
def listen() -> None:
BENCHMARK_RUNS_CHECKBOX_GROUP.change(update_benchmark_runs, inputs = BENCHMARK_RUNS_CHECKBOX_GROUP, outputs = BENCHMARK_RUNS_CHECKBOX_GROUP)
def update_benchmark_runs(benchmark_runs : List[str]) -> Update:
return gradio.update(value = benchmark_runs)
register_ui_component('benchmark_runs_checkbox_group', BENCHMARK_RUNS_CHECKBOX_GROUP)
register_ui_component('benchmark_cycles_slider', BENCHMARK_CYCLES_SLIDER)

View File

@@ -0,0 +1,38 @@
from typing import Optional, List
import gradio
import facefusion.globals
from facefusion import wording
from facefusion.uis import choices
COMMON_OPTIONS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
def render() -> None:
global COMMON_OPTIONS_CHECKBOX_GROUP
value = []
if facefusion.globals.keep_fps:
value.append('keep-fps')
if facefusion.globals.keep_temp:
value.append('keep-temp')
if facefusion.globals.skip_audio:
value.append('skip-audio')
if facefusion.globals.skip_download:
value.append('skip-download')
COMMON_OPTIONS_CHECKBOX_GROUP = gradio.CheckboxGroup(
label = wording.get('common_options_checkbox_group_label'),
choices = choices.common_options,
value = value
)
def listen() -> None:
COMMON_OPTIONS_CHECKBOX_GROUP.change(update, inputs = COMMON_OPTIONS_CHECKBOX_GROUP)
def update(common_options : List[str]) -> None:
facefusion.globals.keep_fps = 'keep-fps' in common_options
facefusion.globals.keep_temp = 'keep-temp' in common_options
facefusion.globals.skip_audio = 'skip-audio' in common_options
facefusion.globals.skip_download = 'skip-download' in common_options
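
The checkbox group round-trips four boolean globals through membership tests. A condensed sketch of the same mapping, using a plain namespace instead of facefusion.globals:

```
from types import SimpleNamespace
from typing import List

state = SimpleNamespace(keep_fps = False, keep_temp = False, skip_audio = False, skip_download = False)

def update(common_options : List[str]) -> None:
    state.keep_fps = 'keep-fps' in common_options
    state.keep_temp = 'keep-temp' in common_options
    state.skip_audio = 'skip-audio' in common_options
    state.skip_download = 'skip-download' in common_options

update([ 'keep-fps', 'skip-download' ])
assert state.keep_fps and state.skip_download and not state.keep_temp
```
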

View File

@@ -6,7 +6,6 @@ import facefusion.globals
from facefusion import wording
from facefusion.face_analyser import clear_face_analyser
from facefusion.processors.frame.core import clear_frame_processors_modules
from facefusion.uis.typing import Update
from facefusion.utilities import encode_execution_providers, decode_execution_providers
EXECUTION_PROVIDERS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
@@ -26,10 +25,10 @@ def listen() -> None:
EXECUTION_PROVIDERS_CHECKBOX_GROUP.change(update_execution_providers, inputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP, outputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP)
def update_execution_providers(execution_providers : List[str]) -> Update:
def update_execution_providers(execution_providers : List[str]) -> gradio.CheckboxGroup:
clear_face_analyser()
clear_frame_processors_modules()
if not execution_providers:
execution_providers = encode_execution_providers(onnxruntime.get_available_providers())
facefusion.globals.execution_providers = decode_execution_providers(execution_providers)
return gradio.update(value = execution_providers)
return gradio.CheckboxGroup(value = execution_providers)
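
encode_execution_providers and decode_execution_providers come from facefusion.utilities; the following is a plausible sketch of their round trip, written as an assumption about the implementation rather than a copy of it (display names drop the 'ExecutionProvider' suffix, decoding maps them back for onnxruntime):

```
from typing import List

def encode_execution_providers(execution_providers : List[str]) -> List[str]:
    # 'CPUExecutionProvider' -> 'cpu'
    return [ execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers ]

def decode_execution_providers(execution_providers : List[str]) -> List[str]:
    available_providers = [ 'CPUExecutionProvider' ] # stand-in for onnxruntime.get_available_providers()
    return [ provider for provider, encoded in zip(available_providers, encode_execution_providers(available_providers)) if encoded in execution_providers ]

assert decode_execution_providers([ 'cpu' ]) == [ 'CPUExecutionProvider' ]
```
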

View File

@@ -3,7 +3,6 @@ import gradio
import facefusion.globals
from facefusion import wording
from facefusion.uis.typing import Update
EXECUTION_QUEUE_COUNT_SLIDER : Optional[gradio.Slider] = None
@@ -21,9 +20,9 @@ def render() -> None:
def listen() -> None:
EXECUTION_QUEUE_COUNT_SLIDER.change(update_execution_queue_count, inputs = EXECUTION_QUEUE_COUNT_SLIDER, outputs = EXECUTION_QUEUE_COUNT_SLIDER)
EXECUTION_QUEUE_COUNT_SLIDER.change(update_execution_queue_count, inputs = EXECUTION_QUEUE_COUNT_SLIDER)
def update_execution_queue_count(execution_queue_count : int = 1) -> Update:
def update_execution_queue_count(execution_queue_count : int = 1) -> None:
facefusion.globals.execution_queue_count = execution_queue_count
return gradio.update(value = execution_queue_count)

View File

@@ -3,7 +3,6 @@ import gradio
import facefusion.globals
from facefusion import wording
from facefusion.uis.typing import Update
EXECUTION_THREAD_COUNT_SLIDER : Optional[gradio.Slider] = None
@@ -21,9 +20,9 @@ def render() -> None:
def listen() -> None:
EXECUTION_THREAD_COUNT_SLIDER.change(update_execution_thread_count, inputs = EXECUTION_THREAD_COUNT_SLIDER, outputs = EXECUTION_THREAD_COUNT_SLIDER)
EXECUTION_THREAD_COUNT_SLIDER.change(update_execution_thread_count, inputs = EXECUTION_THREAD_COUNT_SLIDER)
def update_execution_thread_count(execution_thread_count : int = 1) -> Update:
def update_execution_thread_count(execution_thread_count : int = 1) -> None:
facefusion.globals.execution_thread_count = execution_thread_count
return gradio.update(value = execution_thread_count)

View File

@@ -5,8 +5,7 @@ import gradio
import facefusion.choices
import facefusion.globals
from facefusion import wording
from facefusion.uis import core as ui
from facefusion.uis.typing import Update
from facefusion.uis.core import register_ui_component
FACE_ANALYSER_DIRECTION_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ANALYSER_AGE_DROPDOWN : Optional[gradio.Dropdown] = None
@@ -20,33 +19,32 @@ def render() -> None:
FACE_ANALYSER_DIRECTION_DROPDOWN = gradio.Dropdown(
label = wording.get('face_analyser_direction_dropdown_label'),
choices = facefusion.choices.face_analyser_direction,
choices = facefusion.choices.face_analyser_directions,
value = facefusion.globals.face_analyser_direction
)
FACE_ANALYSER_AGE_DROPDOWN = gradio.Dropdown(
label = wording.get('face_analyser_age_dropdown_label'),
choices = ['none'] + facefusion.choices.face_analyser_age,
choices = ['none'] + facefusion.choices.face_analyser_ages,
value = facefusion.globals.face_analyser_age or 'none'
)
FACE_ANALYSER_GENDER_DROPDOWN = gradio.Dropdown(
label = wording.get('face_analyser_gender_dropdown_label'),
choices = ['none'] + facefusion.choices.face_analyser_gender,
choices = ['none'] + facefusion.choices.face_analyser_genders,
value = facefusion.globals.face_analyser_gender or 'none'
)
ui.register_component('face_analyser_direction_dropdown', FACE_ANALYSER_DIRECTION_DROPDOWN)
ui.register_component('face_analyser_age_dropdown', FACE_ANALYSER_AGE_DROPDOWN)
ui.register_component('face_analyser_gender_dropdown', FACE_ANALYSER_GENDER_DROPDOWN)
register_ui_component('face_analyser_direction_dropdown', FACE_ANALYSER_DIRECTION_DROPDOWN)
register_ui_component('face_analyser_age_dropdown', FACE_ANALYSER_AGE_DROPDOWN)
register_ui_component('face_analyser_gender_dropdown', FACE_ANALYSER_GENDER_DROPDOWN)
def listen() -> None:
FACE_ANALYSER_DIRECTION_DROPDOWN.select(lambda value: update_dropdown('face_analyser_direction', value), inputs = FACE_ANALYSER_DIRECTION_DROPDOWN, outputs = FACE_ANALYSER_DIRECTION_DROPDOWN)
FACE_ANALYSER_AGE_DROPDOWN.select(lambda value: update_dropdown('face_analyser_age', value), inputs = FACE_ANALYSER_AGE_DROPDOWN, outputs = FACE_ANALYSER_AGE_DROPDOWN)
FACE_ANALYSER_GENDER_DROPDOWN.select(lambda value: update_dropdown('face_analyser_gender', value), inputs = FACE_ANALYSER_GENDER_DROPDOWN, outputs = FACE_ANALYSER_GENDER_DROPDOWN)
FACE_ANALYSER_DIRECTION_DROPDOWN.select(lambda value: update_dropdown('face_analyser_direction', value), inputs = FACE_ANALYSER_DIRECTION_DROPDOWN)
FACE_ANALYSER_AGE_DROPDOWN.select(lambda value: update_dropdown('face_analyser_age', value), inputs = FACE_ANALYSER_AGE_DROPDOWN)
FACE_ANALYSER_GENDER_DROPDOWN.select(lambda value: update_dropdown('face_analyser_gender', value), inputs = FACE_ANALYSER_GENDER_DROPDOWN)
def update_dropdown(name : str, value : str) -> Update:
def update_dropdown(name : str, value : str) -> None:
if value == 'none':
setattr(facefusion.globals, name, None)
else:
setattr(facefusion.globals, name, value)
return gradio.update(value = value)

View File

@@ -9,9 +9,9 @@ from facefusion.vision import get_video_frame, normalize_frame_color, read_stati
from facefusion.face_analyser import get_many_faces
from facefusion.face_reference import clear_face_reference
from facefusion.typing import Frame, FaceRecognition
from facefusion.uis import core as ui
from facefusion.uis.typing import ComponentName, Update
from facefusion.utilities import is_image, is_video
from facefusion.uis.core import get_ui_component, register_ui_component
from facefusion.uis.typing import ComponentName
FACE_RECOGNITION_DROPDOWN : Optional[gradio.Dropdown] = None
REFERENCE_FACE_POSITION_GALLERY : Optional[gradio.Gallery] = None
@@ -40,20 +40,21 @@ def render() -> None:
reference_face_gallery_args['value'] = extract_gallery_frames(reference_frame)
FACE_RECOGNITION_DROPDOWN = gradio.Dropdown(
label = wording.get('face_recognition_dropdown_label'),
choices = facefusion.choices.face_recognition,
choices = facefusion.choices.face_recognitions,
value = facefusion.globals.face_recognition
)
REFERENCE_FACE_POSITION_GALLERY = gradio.Gallery(**reference_face_gallery_args)
REFERENCE_FACE_DISTANCE_SLIDER = gradio.Slider(
label = wording.get('reference_face_distance_slider_label'),
value = facefusion.globals.reference_face_distance,
maximum = 3,
step = 0.05,
minimum = 0,
maximum = 3,
visible = 'reference' in facefusion.globals.face_recognition
)
ui.register_component('face_recognition_dropdown', FACE_RECOGNITION_DROPDOWN)
ui.register_component('reference_face_position_gallery', REFERENCE_FACE_POSITION_GALLERY)
ui.register_component('reference_face_distance_slider', REFERENCE_FACE_DISTANCE_SLIDER)
register_ui_component('face_recognition_dropdown', FACE_RECOGNITION_DROPDOWN)
register_ui_component('reference_face_position_gallery', REFERENCE_FACE_POSITION_GALLERY)
register_ui_component('reference_face_distance_slider', REFERENCE_FACE_DISTANCE_SLIDER)
def listen() -> None:
@@ -67,7 +68,7 @@ def listen() -> None:
'target_video'
]
for component_name in multi_component_names:
component = ui.get_component(component_name)
component = get_ui_component(component_name)
if component:
for method in [ 'upload', 'change', 'clear' ]:
getattr(component, method)(update_face_reference_position, outputs = REFERENCE_FACE_POSITION_GALLERY)
@@ -78,29 +79,29 @@ def listen() -> None:
'face_analyser_gender_dropdown'
]
for component_name in select_component_names:
component = ui.get_component(component_name)
component = get_ui_component(component_name)
if component:
component.select(update_face_reference_position, outputs = REFERENCE_FACE_POSITION_GALLERY)
preview_frame_slider = ui.get_component('preview_frame_slider')
preview_frame_slider = get_ui_component('preview_frame_slider')
if preview_frame_slider:
preview_frame_slider.release(update_face_reference_position, outputs = REFERENCE_FACE_POSITION_GALLERY)
def update_face_recognition(face_recognition : FaceRecognition) -> Tuple[Update, Update]:
def update_face_recognition(face_recognition : FaceRecognition) -> Tuple[gradio.Gallery, gradio.Slider]:
if face_recognition == 'reference':
facefusion.globals.face_recognition = face_recognition
return gradio.update(visible = True), gradio.update(visible = True)
return gradio.Gallery(visible = True), gradio.Slider(visible = True)
if face_recognition == 'many':
facefusion.globals.face_recognition = face_recognition
return gradio.update(visible = False), gradio.update(visible = False)
return gradio.Gallery(visible = False), gradio.Slider(visible = False)
def clear_and_update_face_reference_position(event: gradio.SelectData) -> Update:
def clear_and_update_face_reference_position(event: gradio.SelectData) -> gradio.Gallery:
clear_face_reference()
return update_face_reference_position(event.index)
def update_face_reference_position(reference_face_position : int = 0) -> Update:
def update_face_reference_position(reference_face_position : int = 0) -> gradio.Gallery:
gallery_frames = []
facefusion.globals.reference_face_position = reference_face_position
if is_image(facefusion.globals.target_path):
@@ -110,13 +111,12 @@ def update_face_reference_position(reference_face_position : int = 0) -> Update:
reference_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
gallery_frames = extract_gallery_frames(reference_frame)
if gallery_frames:
return gradio.update(value = gallery_frames)
return gradio.update(value = None)
return gradio.Gallery(value = gallery_frames)
return gradio.Gallery(value = None)
def update_reference_face_distance(reference_face_distance : float) -> Update:
def update_reference_face_distance(reference_face_distance : float) -> None:
facefusion.globals.reference_face_distance = reference_face_distance
return gradio.update(value = reference_face_distance)
def extract_gallery_frames(reference_frame : Frame) -> List[Frame]:

View File

@@ -4,9 +4,8 @@ import gradio
import facefusion.globals
from facefusion import wording
from facefusion.processors.frame.core import load_frame_processor_module, clear_frame_processors_modules
from facefusion.uis import core as ui
from facefusion.uis.typing import Update
from facefusion.utilities import list_module_names
from facefusion.uis.core import register_ui_component
FRAME_PROCESSORS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
@@ -19,23 +18,23 @@ def render() -> None:
choices = sort_frame_processors(facefusion.globals.frame_processors),
value = facefusion.globals.frame_processors
)
ui.register_component('frame_processors_checkbox_group', FRAME_PROCESSORS_CHECKBOX_GROUP)
register_ui_component('frame_processors_checkbox_group', FRAME_PROCESSORS_CHECKBOX_GROUP)
def listen() -> None:
FRAME_PROCESSORS_CHECKBOX_GROUP.change(update_frame_processors, inputs = FRAME_PROCESSORS_CHECKBOX_GROUP, outputs = FRAME_PROCESSORS_CHECKBOX_GROUP)
def update_frame_processors(frame_processors : List[str]) -> Update:
clear_frame_processors_modules()
def update_frame_processors(frame_processors : List[str]) -> gradio.CheckboxGroup:
facefusion.globals.frame_processors = frame_processors
clear_frame_processors_modules()
for frame_processor in frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
if not frame_processor_module.pre_check():
return gradio.update()
return gradio.update(value = frame_processors, choices = sort_frame_processors(frame_processors))
return gradio.CheckboxGroup()
return gradio.CheckboxGroup(value = frame_processors, choices = sort_frame_processors(frame_processors))
def sort_frame_processors(frame_processors : List[str]) -> List[str]:
frame_processors_names = list_module_names('facefusion/processors/frame/modules')
return sorted(frame_processors_names, key = lambda frame_processor : frame_processors.index(frame_processor) if frame_processor in frame_processors else len(frame_processors))
available_frame_processors = list_module_names('facefusion/processors/frame/modules')
return sorted(available_frame_processors, key = lambda frame_processor : frame_processors.index(frame_processor) if frame_processor in frame_processors else len(frame_processors))
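
The sort key keeps the user-selected processors in their chosen order and pushes every other available module to the back. Worked through with a stand-in module list:

```
from typing import List

def sort_frame_processors(frame_processors : List[str]) -> List[str]:
    available_frame_processors = [ 'face_enhancer', 'face_swapper', 'frame_enhancer' ] # stand-in for list_module_names()
    return sorted(available_frame_processors, key = lambda frame_processor : frame_processors.index(frame_processor) if frame_processor in frame_processors else len(frame_processors))

assert sort_frame_processors([ 'frame_enhancer', 'face_swapper' ]) == [ 'frame_enhancer', 'face_swapper', 'face_enhancer' ]
```
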

View File

@@ -0,0 +1,118 @@
from typing import List, Optional, Tuple
import gradio
import facefusion.globals
from facefusion import wording
from facefusion.processors.frame.core import load_frame_processor_module
from facefusion.processors.frame import globals as frame_processors_globals, choices as frame_processors_choices
from facefusion.uis.core import get_ui_component, register_ui_component
FACE_SWAPPER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
FRAME_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FRAME_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global FACE_SWAPPER_MODEL_DROPDOWN
global FACE_ENHANCER_MODEL_DROPDOWN
global FACE_ENHANCER_BLEND_SLIDER
global FRAME_ENHANCER_MODEL_DROPDOWN
global FRAME_ENHANCER_BLEND_SLIDER
FACE_SWAPPER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('face_swapper_model_dropdown_label'),
choices = frame_processors_choices.face_swapper_models,
value = frame_processors_globals.face_swapper_model,
visible = 'face_swapper' in facefusion.globals.frame_processors
)
FACE_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('face_enhancer_model_dropdown_label'),
choices = frame_processors_choices.face_enhancer_models,
value = frame_processors_globals.face_enhancer_model,
visible = 'face_enhancer' in facefusion.globals.frame_processors
)
FACE_ENHANCER_BLEND_SLIDER = gradio.Slider(
label = wording.get('face_enhancer_blend_slider_label'),
value = frame_processors_globals.face_enhancer_blend,
step = 1,
minimum = 0,
maximum = 100,
visible = 'face_enhancer' in facefusion.globals.frame_processors
)
FRAME_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('frame_enhancer_model_dropdown_label'),
choices = frame_processors_choices.frame_enhancer_models,
value = frame_processors_globals.frame_enhancer_model,
visible = 'frame_enhancer' in facefusion.globals.frame_processors
)
FRAME_ENHANCER_BLEND_SLIDER = gradio.Slider(
label = wording.get('frame_enhancer_blend_slider_label'),
value = frame_processors_globals.frame_enhancer_blend,
step = 1,
minimum = 0,
maximum = 100,
visible = 'frame_enhancer' in facefusion.globals.frame_processors
)
register_ui_component('face_swapper_model_dropdown', FACE_SWAPPER_MODEL_DROPDOWN)
register_ui_component('face_enhancer_model_dropdown', FACE_ENHANCER_MODEL_DROPDOWN)
register_ui_component('face_enhancer_blend_slider', FACE_ENHANCER_BLEND_SLIDER)
register_ui_component('frame_enhancer_model_dropdown', FRAME_ENHANCER_MODEL_DROPDOWN)
register_ui_component('frame_enhancer_blend_slider', FRAME_ENHANCER_BLEND_SLIDER)
def listen() -> None:
FACE_SWAPPER_MODEL_DROPDOWN.change(update_face_swapper_model, inputs = FACE_SWAPPER_MODEL_DROPDOWN, outputs = FACE_SWAPPER_MODEL_DROPDOWN)
FACE_ENHANCER_MODEL_DROPDOWN.change(update_face_enhancer_model, inputs = FACE_ENHANCER_MODEL_DROPDOWN, outputs = FACE_ENHANCER_MODEL_DROPDOWN)
FACE_ENHANCER_BLEND_SLIDER.change(update_face_enhancer_blend, inputs = FACE_ENHANCER_BLEND_SLIDER)
FRAME_ENHANCER_MODEL_DROPDOWN.change(update_frame_enhancer_model, inputs = FRAME_ENHANCER_MODEL_DROPDOWN, outputs = FRAME_ENHANCER_MODEL_DROPDOWN)
FRAME_ENHANCER_BLEND_SLIDER.change(update_frame_enhancer_blend, inputs = FRAME_ENHANCER_BLEND_SLIDER)
frame_processors_checkbox_group = get_ui_component('frame_processors_checkbox_group')
if frame_processors_checkbox_group:
frame_processors_checkbox_group.change(toggle_face_swapper_model, inputs = frame_processors_checkbox_group, outputs = [ FACE_SWAPPER_MODEL_DROPDOWN, FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_BLEND_SLIDER, FRAME_ENHANCER_MODEL_DROPDOWN, FRAME_ENHANCER_BLEND_SLIDER ])
def update_face_swapper_model(face_swapper_model : str) -> gradio.Dropdown:
frame_processors_globals.face_swapper_model = face_swapper_model
face_swapper_module = load_frame_processor_module('face_swapper')
face_swapper_module.clear_frame_processor()
face_swapper_module.set_options('model', face_swapper_module.MODELS[face_swapper_model])
if not face_swapper_module.pre_check():
return gradio.Dropdown()
return gradio.Dropdown(value = face_swapper_model)
def update_face_enhancer_model(face_enhancer_model : str) -> gradio.Dropdown:
frame_processors_globals.face_enhancer_model = face_enhancer_model
face_enhancer_module = load_frame_processor_module('face_enhancer')
face_enhancer_module.clear_frame_processor()
face_enhancer_module.set_options('model', face_enhancer_module.MODELS[face_enhancer_model])
if not face_enhancer_module.pre_check():
return gradio.Dropdown()
return gradio.Dropdown(value = face_enhancer_model)
def update_face_enhancer_blend(face_enhancer_blend : int) -> None:
frame_processors_globals.face_enhancer_blend = face_enhancer_blend
def update_frame_enhancer_model(frame_enhancer_model : str) -> gradio.Dropdown:
frame_processors_globals.frame_enhancer_model = frame_enhancer_model
frame_enhancer_module = load_frame_processor_module('frame_enhancer')
frame_enhancer_module.clear_frame_processor()
frame_enhancer_module.set_options('model', frame_enhancer_module.MODELS[frame_enhancer_model])
if not frame_enhancer_module.pre_check():
return gradio.Dropdown()
return gradio.Dropdown(value = frame_enhancer_model)
def update_frame_enhancer_blend(frame_enhancer_blend : int) -> None:
frame_processors_globals.frame_enhancer_blend = frame_enhancer_blend
def toggle_face_swapper_model(frame_processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Slider]:
has_face_swapper = 'face_swapper' in frame_processors
has_face_enhancer = 'face_enhancer' in frame_processors
has_frame_enhancer = 'frame_enhancer' in frame_processors
return gradio.Dropdown(visible = has_face_swapper), gradio.Dropdown(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer), gradio.Dropdown(visible = has_frame_enhancer), gradio.Slider(visible = has_frame_enhancer)
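
For the update_* handlers above to work, each frame processor module has to expose a MODELS table plus set_options and clear_frame_processor hooks. A sketch of that implicit contract with a hypothetical model entry (the real tables live in the processor modules):

```
from typing import Any, Dict

FRAME_PROCESSOR = None # cached inference session in the real modules
OPTIONS : Dict[str, Any] = {}
MODELS : Dict[str, Dict[str, Any]] =\
{
    'gfpgan_1.4': # hypothetical entry
    {
        'url': 'https://example.com/gfpgan_1.4.onnx',
        'path': 'models/gfpgan_1.4.onnx'
    }
}

def set_options(key : str, value : Any) -> None:
    OPTIONS[key] = value

def clear_frame_processor() -> None:
    # drop the cached session so the next inference rebuilds it with the new model
    global FRAME_PROCESSOR
    FRAME_PROCESSOR = None

clear_frame_processor()
set_options('model', MODELS['gfpgan_1.4'])
```
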

View File

@@ -3,7 +3,6 @@ import gradio
import facefusion.globals
from facefusion import wording
from facefusion.uis.typing import Update
MAX_MEMORY_SLIDER : Optional[gradio.Slider] = None
@@ -13,16 +12,15 @@ def render() -> None:
MAX_MEMORY_SLIDER = gradio.Slider(
label = wording.get('max_memory_slider_label'),
step = 1,
minimum = 0,
maximum = 128,
step = 1
maximum = 128
)
def listen() -> None:
MAX_MEMORY_SLIDER.change(update_max_memory, inputs = MAX_MEMORY_SLIDER, outputs = MAX_MEMORY_SLIDER)
MAX_MEMORY_SLIDER.change(update_max_memory, inputs = MAX_MEMORY_SLIDER)
def update_max_memory(max_memory : int) -> Update:
def update_max_memory(max_memory : int) -> None:
facefusion.globals.max_memory = max_memory if max_memory > 0 else None
return gradio.update(value = max_memory)

View File

@@ -1,16 +1,14 @@
import tempfile
from typing import Tuple, Optional
import gradio
import facefusion.globals
from facefusion import wording
from facefusion.core import limit_resources, conditional_process
from facefusion.uis.typing import Update
from facefusion.uis.core import get_ui_component
from facefusion.utilities import is_image, is_video, normalize_output_path, clear_temp
OUTPUT_IMAGE : Optional[gradio.Image] = None
OUTPUT_VIDEO : Optional[gradio.Video] = None
OUTPUT_PATH_TEXTBOX : Optional[gradio.Textbox] = None
OUTPUT_START_BUTTON : Optional[gradio.Button] = None
OUTPUT_CLEAR_BUTTON : Optional[gradio.Button] = None
@@ -18,7 +16,6 @@ OUTPUT_CLEAR_BUTTON : Optional[gradio.Button] = None
def render() -> None:
global OUTPUT_IMAGE
global OUTPUT_VIDEO
global OUTPUT_PATH_TEXTBOX
global OUTPUT_START_BUTTON
global OUTPUT_CLEAR_BUTTON
@@ -29,43 +26,36 @@ def render() -> None:
OUTPUT_VIDEO = gradio.Video(
label = wording.get('output_image_or_video_label')
)
OUTPUT_PATH_TEXTBOX = gradio.Textbox(
label = wording.get('output_path_textbox_label'),
value = facefusion.globals.output_path or tempfile.gettempdir(),
max_lines = 1
)
OUTPUT_START_BUTTON = gradio.Button(
value = wording.get('start_button_label'),
variant = 'primary'
variant = 'primary',
size = 'sm'
)
OUTPUT_CLEAR_BUTTON = gradio.Button(
value = wording.get('clear_button_label'),
size = 'sm'
)
def listen() -> None:
OUTPUT_PATH_TEXTBOX.change(update_output_path, inputs = OUTPUT_PATH_TEXTBOX, outputs = OUTPUT_PATH_TEXTBOX)
OUTPUT_START_BUTTON.click(start, inputs = OUTPUT_PATH_TEXTBOX, outputs = [ OUTPUT_IMAGE, OUTPUT_VIDEO ])
output_path_textbox = get_ui_component('output_path_textbox')
if output_path_textbox:
OUTPUT_START_BUTTON.click(start, inputs = output_path_textbox, outputs = [ OUTPUT_IMAGE, OUTPUT_VIDEO ])
OUTPUT_CLEAR_BUTTON.click(clear, outputs = [ OUTPUT_IMAGE, OUTPUT_VIDEO ])
def start(output_path : str) -> Tuple[Update, Update]:
def start(output_path : str) -> Tuple[gradio.Image, gradio.Video]:
facefusion.globals.output_path = normalize_output_path(facefusion.globals.source_path, facefusion.globals.target_path, output_path)
limit_resources()
conditional_process()
if is_image(facefusion.globals.output_path):
return gradio.update(value = facefusion.globals.output_path, visible = True), gradio.update(value = None, visible = False)
return gradio.Image(value = facefusion.globals.output_path, visible = True), gradio.Video(value = None, visible = False)
if is_video(facefusion.globals.output_path):
return gradio.update(value = None, visible = False), gradio.update(value = facefusion.globals.output_path, visible = True)
return gradio.update(), gradio.update()
return gradio.Image(value = None, visible = False), gradio.Video(value = facefusion.globals.output_path, visible = True)
return gradio.Image(), gradio.Video()
def update_output_path(output_path : str) -> Update:
facefusion.globals.output_path = output_path
return gradio.update(value = output_path)
def clear() -> Tuple[Update, Update]:
def clear() -> Tuple[gradio.Image, gradio.Video]:
if facefusion.globals.target_path:
clear_temp(facefusion.globals.target_path)
return gradio.update(value = None), gradio.update(value = None)
return gradio.Image(value = None), gradio.Video(value = None)

View File

@@ -1,33 +1,43 @@
from typing import Optional, Tuple, List
import tempfile
import gradio
import facefusion.choices
import facefusion.globals
from facefusion import wording
from facefusion.typing import OutputVideoEncoder
from facefusion.uis import core as ui
from facefusion.uis.typing import Update, ComponentName
from facefusion.utilities import is_image, is_video
from facefusion.uis.typing import ComponentName
from facefusion.uis.core import get_ui_component, register_ui_component
OUTPUT_PATH_TEXTBOX : Optional[gradio.Textbox] = None
OUTPUT_IMAGE_QUALITY_SLIDER : Optional[gradio.Slider] = None
OUTPUT_VIDEO_ENCODER_DROPDOWN : Optional[gradio.Dropdown] = None
OUTPUT_VIDEO_QUALITY_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global OUTPUT_PATH_TEXTBOX
global OUTPUT_IMAGE_QUALITY_SLIDER
global OUTPUT_VIDEO_ENCODER_DROPDOWN
global OUTPUT_VIDEO_QUALITY_SLIDER
OUTPUT_PATH_TEXTBOX = gradio.Textbox(
label = wording.get('output_path_textbox_label'),
value = facefusion.globals.output_path or tempfile.gettempdir(),
max_lines = 1
)
OUTPUT_IMAGE_QUALITY_SLIDER = gradio.Slider(
label = wording.get('output_image_quality_slider_label'),
value = facefusion.globals.output_image_quality,
step = 1,
minimum = 0,
maximum = 100,
visible = is_image(facefusion.globals.target_path)
)
OUTPUT_VIDEO_ENCODER_DROPDOWN = gradio.Dropdown(
label = wording.get('output_video_encoder_dropdown_label'),
choices = facefusion.choices.output_video_encoder,
choices = facefusion.choices.output_video_encoders,
value = facefusion.globals.output_video_encoder,
visible = is_video(facefusion.globals.target_path)
)
@@ -35,14 +45,18 @@ def render() -> None:
label = wording.get('output_video_quality_slider_label'),
value = facefusion.globals.output_video_quality,
step = 1,
minimum = 0,
maximum = 100,
visible = is_video(facefusion.globals.target_path)
)
register_ui_component('output_path_textbox', OUTPUT_PATH_TEXTBOX)
def listen() -> None:
OUTPUT_IMAGE_QUALITY_SLIDER.change(update_output_image_quality, inputs = OUTPUT_IMAGE_QUALITY_SLIDER, outputs = OUTPUT_IMAGE_QUALITY_SLIDER)
OUTPUT_VIDEO_ENCODER_DROPDOWN.select(update_output_video_encoder, inputs = OUTPUT_VIDEO_ENCODER_DROPDOWN, outputs = OUTPUT_VIDEO_ENCODER_DROPDOWN)
OUTPUT_VIDEO_QUALITY_SLIDER.change(update_output_video_quality, inputs = OUTPUT_VIDEO_QUALITY_SLIDER, outputs = OUTPUT_VIDEO_QUALITY_SLIDER)
OUTPUT_PATH_TEXTBOX.change(update_output_path, inputs = OUTPUT_PATH_TEXTBOX)
OUTPUT_IMAGE_QUALITY_SLIDER.change(update_output_image_quality, inputs = OUTPUT_IMAGE_QUALITY_SLIDER)
OUTPUT_VIDEO_ENCODER_DROPDOWN.select(update_output_video_encoder, inputs = OUTPUT_VIDEO_ENCODER_DROPDOWN)
OUTPUT_VIDEO_QUALITY_SLIDER.change(update_output_video_quality, inputs = OUTPUT_VIDEO_QUALITY_SLIDER)
multi_component_names : List[ComponentName] =\
[
'source_image',
@@ -50,30 +64,31 @@ def listen() -> None:
'target_video'
]
for component_name in multi_component_names:
component = ui.get_component(component_name)
component = get_ui_component(component_name)
if component:
for method in [ 'upload', 'change', 'clear' ]:
getattr(component, method)(remote_update, outputs = [ OUTPUT_IMAGE_QUALITY_SLIDER, OUTPUT_VIDEO_ENCODER_DROPDOWN, OUTPUT_VIDEO_QUALITY_SLIDER ])
def remote_update() -> Tuple[Update, Update, Update]:
def remote_update() -> Tuple[gradio.Slider, gradio.Dropdown, gradio.Slider]:
if is_image(facefusion.globals.target_path):
return gradio.update(visible = True), gradio.update(visible = False), gradio.update(visible = False)
return gradio.Slider(visible = True), gradio.Dropdown(visible = False), gradio.Slider(visible = False)
if is_video(facefusion.globals.target_path):
return gradio.update(visible = False), gradio.update(visible = True), gradio.update(visible = True)
return gradio.update(visible = False), gradio.update(visible = False), gradio.update(visible = False)
return gradio.Slider(visible = False), gradio.Dropdown(visible = True), gradio.Slider(visible = True)
return gradio.Slider(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False)
def update_output_image_quality(output_image_quality : int) -> Update:
def update_output_path(output_path : str) -> None:
facefusion.globals.output_path = output_path
def update_output_image_quality(output_image_quality : int) -> None:
facefusion.globals.output_image_quality = output_image_quality
return gradio.update(value = output_image_quality)
def update_output_video_encoder(output_video_encoder: OutputVideoEncoder) -> Update:
def update_output_video_encoder(output_video_encoder: OutputVideoEncoder) -> None:
facefusion.globals.output_video_encoder = output_video_encoder
return gradio.update(value = output_video_encoder)
def update_output_video_quality(output_video_quality : int) -> Update:
def update_output_video_quality(output_video_quality : int) -> None:
facefusion.globals.output_video_quality = output_video_quality
return gradio.update(value = output_video_quality)

View File

@@ -4,15 +4,15 @@ import gradio
import facefusion.globals
from facefusion import wording
from facefusion.typing import Frame, Face
from facefusion.vision import get_video_frame, count_video_frame_total, normalize_frame_color, resize_frame_dimension, read_static_image
from facefusion.face_analyser import get_one_face
from facefusion.face_reference import get_face_reference, set_face_reference
from facefusion.predictor import predict_frame
from facefusion.processors.frame.core import load_frame_processor_module
from facefusion.typing import Frame, Face
from facefusion.uis import core as ui
from facefusion.uis.typing import ComponentName, Update
from facefusion.utilities import is_video, is_image
from facefusion.uis.typing import ComponentName
from facefusion.uis.core import get_ui_component, register_ui_component
PREVIEW_IMAGE : Optional[gradio.Image] = None
PREVIEW_FRAME_SLIDER : Optional[gradio.Slider] = None
@@ -24,12 +24,15 @@ def render() -> None:
preview_image_args: Dict[str, Any] =\
{
'label': wording.get('preview_image_label')
'label': wording.get('preview_image_label'),
'interactive': False
}
preview_frame_slider_args: Dict[str, Any] =\
{
'label': wording.get('preview_frame_slider_label'),
'step': 1,
'minimum': 0,
'maximum': 100,
'visible': False
}
conditional_set_face_reference()
@@ -49,7 +52,7 @@ def render() -> None:
preview_frame_slider_args['visible'] = True
PREVIEW_IMAGE = gradio.Image(**preview_image_args)
PREVIEW_FRAME_SLIDER = gradio.Slider(**preview_frame_slider_args)
ui.register_component('preview_frame_slider', PREVIEW_FRAME_SLIDER)
register_ui_component('preview_frame_slider', PREVIEW_FRAME_SLIDER)
def listen() -> None:
@@ -61,7 +64,7 @@ def listen() -> None:
'target_video'
]
for component_name in multi_component_names:
component = ui.get_component(component_name)
component = get_ui_component(component_name)
if component:
for method in [ 'upload', 'change', 'clear' ]:
getattr(component, method)(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
@@ -69,10 +72,13 @@ def listen() -> None:
update_component_names : List[ComponentName] =\
[
'face_recognition_dropdown',
'frame_processors_checkbox_group'
'frame_processors_checkbox_group',
'face_swapper_model_dropdown',
'face_enhancer_model_dropdown',
'frame_enhancer_model_dropdown'
]
for component_name in update_component_names:
component = ui.get_component(component_name)
component = get_ui_component(component_name)
if component:
component.change(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
select_component_names : List[ComponentName] =\
@@ -83,15 +89,22 @@ def listen() -> None:
'face_analyser_gender_dropdown'
]
for component_name in select_component_names:
component = ui.get_component(component_name)
component = get_ui_component(component_name)
if component:
component.select(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
reference_face_distance_slider = ui.get_component('reference_face_distance_slider')
if reference_face_distance_slider:
reference_face_distance_slider.change(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
change_component_names : List[ComponentName] =\
[
'reference_face_distance_slider',
'face_enhancer_blend_slider',
'frame_enhancer_blend_slider'
]
for component_name in change_component_names:
component = get_ui_component(component_name)
if component:
component.change(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
def update_preview_image(frame_number : int = 0) -> Update:
def update_preview_image(frame_number : int = 0) -> gradio.Image:
conditional_set_face_reference()
source_face = get_one_face(read_static_image(facefusion.globals.source_path))
reference_face = get_face_reference() if 'reference' in facefusion.globals.face_recognition else None
@@ -99,30 +112,30 @@ def update_preview_image(frame_number : int = 0) -> Update:
target_frame = read_static_image(facefusion.globals.target_path)
preview_frame = process_preview_frame(source_face, reference_face, target_frame)
preview_frame = normalize_frame_color(preview_frame)
return gradio.update(value = preview_frame)
return gradio.Image(value = preview_frame)
if is_video(facefusion.globals.target_path):
facefusion.globals.reference_frame_number = frame_number
temp_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
preview_frame = process_preview_frame(source_face, reference_face, temp_frame)
preview_frame = normalize_frame_color(preview_frame)
return gradio.update(value = preview_frame)
return gradio.update(value = None)
return gradio.Image(value = preview_frame)
return gradio.Image(value = None)
def update_preview_frame_slider(frame_number : int = 0) -> Update:
def update_preview_frame_slider(frame_number : int = 0) -> gradio.Slider:
if is_image(facefusion.globals.target_path):
return gradio.update(value = None, maximum = None, visible = False)
return gradio.Slider(value = None, maximum = None, visible = False)
if is_video(facefusion.globals.target_path):
facefusion.globals.reference_frame_number = frame_number
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
return gradio.update(maximum = video_frame_total, visible = True)
return gradio.update(value = None, maximum = None, visible = False)
return gradio.Slider(maximum = video_frame_total, visible = True)
return gradio.Slider()
def process_preview_frame(source_face : Face, reference_face : Face, temp_frame : Frame) -> Frame:
temp_frame = resize_frame_dimension(temp_frame, 640, 640)
if predict_frame(temp_frame):
return cv2.GaussianBlur(temp_frame, (99, 99), 0)
temp_frame = resize_frame_dimension(temp_frame, 480)
for frame_processor in facefusion.globals.frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
if frame_processor_module.pre_process('preview'):

View File

@@ -1,40 +0,0 @@
from typing import Optional, List
import gradio
import facefusion.globals
from facefusion import wording
from facefusion.uis import choices
from facefusion.uis.typing import Update
SETTINGS_CHECKBOX_GROUP : Optional[gradio.Checkboxgroup] = None
def render() -> None:
global SETTINGS_CHECKBOX_GROUP
value = []
if facefusion.globals.keep_fps:
value.append('keep-fps')
if facefusion.globals.keep_temp:
value.append('keep-temp')
if facefusion.globals.skip_audio:
value.append('skip-audio')
if facefusion.globals.skip_download:
value.append('skip-download')
SETTINGS_CHECKBOX_GROUP = gradio.Checkboxgroup(
label = wording.get('settings_checkbox_group_label'),
choices = choices.settings,
value = value
)
def listen() -> None:
SETTINGS_CHECKBOX_GROUP.change(update, inputs = SETTINGS_CHECKBOX_GROUP, outputs = SETTINGS_CHECKBOX_GROUP)
def update(settings : List[str]) -> Update:
facefusion.globals.keep_fps = 'keep-fps' in settings
facefusion.globals.keep_temp = 'keep-temp' in settings
facefusion.globals.skip_audio = 'skip-audio' in settings
facefusion.globals.skip_download = 'skip-download' in settings
return gradio.update(value = settings)

View File

@@ -3,9 +3,8 @@ import gradio
import facefusion.globals
from facefusion import wording
from facefusion.uis import core as ui
from facefusion.uis.typing import Update
from facefusion.utilities import is_image
from facefusion.uis.core import register_ui_component
SOURCE_FILE : Optional[gradio.File] = None
SOURCE_IMAGE : Optional[gradio.Image] = None
@@ -32,16 +31,16 @@ def render() -> None:
visible = is_source_image,
show_label = False
)
ui.register_component('source_image', SOURCE_IMAGE)
register_ui_component('source_image', SOURCE_IMAGE)
def listen() -> None:
SOURCE_FILE.change(update, inputs = SOURCE_FILE, outputs = SOURCE_IMAGE)
def update(file: IO[Any]) -> Update:
def update(file: IO[Any]) -> gradio.Image:
if file and is_image(file.name):
facefusion.globals.source_path = file.name
return gradio.update(value = file.name, visible = True)
return gradio.Image(value = file.name, visible = True)
facefusion.globals.source_path = None
return gradio.update(value = None, visible = False)
return gradio.Image(value = None, visible = False)

View File

@@ -4,9 +4,8 @@ import gradio
import facefusion.globals
from facefusion import wording
from facefusion.face_reference import clear_face_reference
from facefusion.uis import core as ui
from facefusion.uis.typing import Update
from facefusion.utilities import is_image, is_video
from facefusion.uis.core import register_ui_component
TARGET_FILE : Optional[gradio.File] = None
TARGET_IMAGE : Optional[gradio.Image] = None
@@ -42,21 +41,21 @@ def render() -> None:
visible = is_target_video,
show_label = False
)
ui.register_component('target_image', TARGET_IMAGE)
ui.register_component('target_video', TARGET_VIDEO)
register_ui_component('target_image', TARGET_IMAGE)
register_ui_component('target_video', TARGET_VIDEO)
def listen() -> None:
TARGET_FILE.change(update, inputs = TARGET_FILE, outputs = [ TARGET_IMAGE, TARGET_VIDEO ])
def update(file : IO[Any]) -> Tuple[Update, Update]:
def update(file : IO[Any]) -> Tuple[gradio.Image, gradio.Video]:
clear_face_reference()
if file and is_image(file.name):
facefusion.globals.target_path = file.name
return gradio.update(value = file.name, visible = True), gradio.update(value = None, visible = False)
return gradio.Image(value = file.name, visible = True), gradio.Video(value = None, visible = False)
if file and is_video(file.name):
facefusion.globals.target_path = file.name
return gradio.update(value = None, visible = False), gradio.update(value = file.name, visible = True)
return gradio.Image(value = None, visible = False), gradio.Video(value = file.name, visible = True)
facefusion.globals.target_path = None
return gradio.update(value = None, visible = False), gradio.update(value = None, visible = False)
return gradio.Image(value = None, visible = False), gradio.Video(value = None, visible = False)

View File

@@ -5,9 +5,8 @@ import facefusion.choices
import facefusion.globals
from facefusion import wording
from facefusion.typing import TempFrameFormat
from facefusion.uis import core as ui
from facefusion.uis.typing import Update
from facefusion.utilities import is_video
from facefusion.uis.core import get_ui_component
TEMP_FRAME_FORMAT_DROPDOWN : Optional[gradio.Dropdown] = None
TEMP_FRAME_QUALITY_SLIDER : Optional[gradio.Slider] = None
@@ -19,7 +18,7 @@ def render() -> None:
TEMP_FRAME_FORMAT_DROPDOWN = gradio.Dropdown(
label = wording.get('temp_frame_format_dropdown_label'),
choices = facefusion.choices.temp_frame_format,
choices = facefusion.choices.temp_frame_formats,
value = facefusion.globals.temp_frame_format,
visible = is_video(facefusion.globals.target_path)
)
@@ -27,30 +26,30 @@ def render() -> None:
label = wording.get('temp_frame_quality_slider_label'),
value = facefusion.globals.temp_frame_quality,
step = 1,
minimum = 0,
maximum = 100,
visible = is_video(facefusion.globals.target_path)
)
def listen() -> None:
TEMP_FRAME_FORMAT_DROPDOWN.select(update_temp_frame_format, inputs = TEMP_FRAME_FORMAT_DROPDOWN, outputs = TEMP_FRAME_FORMAT_DROPDOWN)
TEMP_FRAME_QUALITY_SLIDER.change(update_temp_frame_quality, inputs = TEMP_FRAME_QUALITY_SLIDER, outputs = TEMP_FRAME_QUALITY_SLIDER)
target_video = ui.get_component('target_video')
TEMP_FRAME_FORMAT_DROPDOWN.select(update_temp_frame_format, inputs = TEMP_FRAME_FORMAT_DROPDOWN)
TEMP_FRAME_QUALITY_SLIDER.change(update_temp_frame_quality, inputs = TEMP_FRAME_QUALITY_SLIDER)
target_video = get_ui_component('target_video')
if target_video:
for method in [ 'upload', 'change', 'clear' ]:
getattr(target_video, method)(remote_update, outputs = [ TEMP_FRAME_FORMAT_DROPDOWN, TEMP_FRAME_QUALITY_SLIDER ])
def remote_update() -> Tuple[Update, Update]:
def remote_update() -> Tuple[gradio.Dropdown, gradio.Slider]:
if is_video(facefusion.globals.target_path):
return gradio.update(visible = True), gradio.update(visible = True)
return gradio.update(visible = False), gradio.update(visible = False)
return gradio.Dropdown(visible = True), gradio.Slider(visible = True)
return gradio.Dropdown(visible = False), gradio.Slider(visible = False)
def update_temp_frame_format(temp_frame_format : TempFrameFormat) -> Update:
def update_temp_frame_format(temp_frame_format : TempFrameFormat) -> None:
facefusion.globals.temp_frame_format = temp_frame_format
return gradio.update(value = temp_frame_format)
def update_temp_frame_quality(temp_frame_quality : int) -> Update:
def update_temp_frame_quality(temp_frame_quality : int) -> None:
facefusion.globals.temp_frame_quality = temp_frame_quality
return gradio.update(value = temp_frame_quality)

View File

@@ -4,9 +4,8 @@ import gradio
import facefusion.globals
from facefusion import wording
from facefusion.vision import count_video_frame_total
from facefusion.uis import core as ui
from facefusion.uis.typing import Update
from facefusion.utilities import is_video
from facefusion.uis.core import get_ui_component
TRIM_FRAME_START_SLIDER : Optional[gradio.Slider] = None
TRIM_FRAME_END_SLIDER : Optional[gradio.Slider] = None
@@ -20,12 +19,16 @@ def render() -> None:
{
'label': wording.get('trim_frame_start_slider_label'),
'step': 1,
'minimum': 0,
'maximum': 100,
'visible': False
}
trim_frame_end_slider_args : Dict[str, Any] =\
{
'label': wording.get('trim_frame_end_slider_label'),
'step': 1,
'minimum': 0,
'maximum': 100,
'visible': False
}
if is_video(facefusion.globals.target_path):
@@ -41,29 +44,27 @@ def render() -> None:
def listen() -> None:
TRIM_FRAME_START_SLIDER.change(update_trim_frame_start, inputs = TRIM_FRAME_START_SLIDER, outputs = TRIM_FRAME_START_SLIDER)
TRIM_FRAME_END_SLIDER.change(update_trim_frame_end, inputs = TRIM_FRAME_END_SLIDER, outputs = TRIM_FRAME_END_SLIDER)
target_video = ui.get_component('target_video')
TRIM_FRAME_START_SLIDER.change(update_trim_frame_start, inputs = TRIM_FRAME_START_SLIDER)
TRIM_FRAME_END_SLIDER.change(update_trim_frame_end, inputs = TRIM_FRAME_END_SLIDER)
target_video = get_ui_component('target_video')
if target_video:
for method in [ 'upload', 'change', 'clear' ]:
getattr(target_video, method)(remote_update, outputs = [ TRIM_FRAME_START_SLIDER, TRIM_FRAME_END_SLIDER ])
def remote_update() -> Tuple[Update, Update]:
def remote_update() -> Tuple[gradio.Slider, gradio.Slider]:
if is_video(facefusion.globals.target_path):
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
facefusion.globals.trim_frame_start = None
facefusion.globals.trim_frame_end = None
return gradio.update(value = 0, maximum = video_frame_total, visible = True), gradio.update(value = video_frame_total, maximum = video_frame_total, visible = True)
return gradio.update(value = None, maximum = None, visible = False), gradio.update(value = None, maximum = None, visible = False)
return gradio.Slider(value = 0, maximum = video_frame_total, visible = True), gradio.Slider(value = video_frame_total, maximum = video_frame_total, visible = True)
return gradio.Slider(value = None, maximum = None, visible = False), gradio.Slider(value = None, maximum = None, visible = False)
def update_trim_frame_start(trim_frame_start : int) -> Update:
def update_trim_frame_start(trim_frame_start : int) -> None:
facefusion.globals.trim_frame_start = trim_frame_start if trim_frame_start > 0 else None
return gradio.update(value = trim_frame_start)
def update_trim_frame_end(trim_frame_end : int) -> Update:
def update_trim_frame_end(trim_frame_end : int) -> None:
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
facefusion.globals.trim_frame_end = trim_frame_end if trim_frame_end < video_frame_total else None
return gradio.update(value = trim_frame_end)

View File

@@ -10,13 +10,14 @@ from tqdm import tqdm
import facefusion.globals
from facefusion import wording
from facefusion.predictor import predict_stream
from facefusion.typing import Frame, Face
from facefusion.face_analyser import get_one_face
from facefusion.processors.frame.core import load_frame_processor_module
from facefusion.uis import core as ui
from facefusion.uis.typing import StreamMode, WebcamMode, Update
from facefusion.processors.frame.core import get_frame_processors_modules
from facefusion.utilities import open_ffmpeg
from facefusion.vision import normalize_frame_color, read_static_image
from facefusion.uis.typing import StreamMode, WebcamMode
from facefusion.uis.core import get_ui_component
WEBCAM_IMAGE : Optional[gradio.Image] = None
WEBCAM_START_BUTTON : Optional[gradio.Button] = None
@@ -33,25 +34,27 @@ def render() -> None:
)
WEBCAM_START_BUTTON = gradio.Button(
value = wording.get('start_button_label'),
variant = 'primary'
variant = 'primary',
size = 'sm'
)
WEBCAM_STOP_BUTTON = gradio.Button(
value = wording.get('stop_button_label')
value = wording.get('stop_button_label'),
size = 'sm'
)
def listen() -> None:
start_event = None
webcam_mode_radio = ui.get_component('webcam_mode_radio')
webcam_resolution_dropdown = ui.get_component('webcam_resolution_dropdown')
webcam_fps_slider = ui.get_component('webcam_fps_slider')
webcam_mode_radio = get_ui_component('webcam_mode_radio')
webcam_resolution_dropdown = get_ui_component('webcam_resolution_dropdown')
webcam_fps_slider = get_ui_component('webcam_fps_slider')
if webcam_mode_radio and webcam_resolution_dropdown and webcam_fps_slider:
start_event = WEBCAM_START_BUTTON.click(start, inputs = [ webcam_mode_radio, webcam_resolution_dropdown, webcam_fps_slider ], outputs = WEBCAM_IMAGE)
webcam_mode_radio.change(stop, outputs = WEBCAM_IMAGE, cancels = start_event)
webcam_resolution_dropdown.change(stop, outputs = WEBCAM_IMAGE, cancels = start_event)
webcam_fps_slider.change(stop, outputs = WEBCAM_IMAGE, cancels = start_event)
WEBCAM_STOP_BUTTON.click(stop, cancels = start_event)
source_image = ui.get_component('source_image')
source_image = get_ui_component('source_image')
if source_image:
for method in [ 'upload', 'change', 'clear' ]:
getattr(source_image, method)(stop, cancels = start_event)
@@ -61,10 +64,8 @@ def start(mode: WebcamMode, resolution: str, fps: float) -> Generator[Frame, Non
facefusion.globals.face_recognition = 'many'
source_face = get_one_face(read_static_image(facefusion.globals.source_path))
stream = None
if mode == 'stream_udp':
stream = open_stream('udp', resolution, fps)
if mode == 'stream_v4l2':
stream = open_stream('v4l2', resolution, fps)
if mode in [ 'udp', 'v4l2' ]:
stream = open_stream(mode, resolution, fps) # type: ignore[arg-type]
capture = capture_webcam(resolution, fps)
if capture.isOpened():
for capture_frame in multi_process_capture(source_face, capture):
@@ -80,6 +81,8 @@ def multi_process_capture(source_face: Face, capture : cv2.VideoCapture) -> Gene
deque_capture_frames : Deque[Frame] = deque()
while True:
_, capture_frame = capture.read()
if predict_stream(capture_frame):
return
future = executor.submit(process_stream_frame, source_face, capture_frame)
futures.append(future)
for future_done in [ future for future in futures if future.done() ]:
@@ -91,8 +94,8 @@ def multi_process_capture(source_face: Face, capture : cv2.VideoCapture) -> Gene
progress.update()
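
The capture loop hands frames to a thread pool but drains finished futures front-first into a deque, so frames leave in the order they were captured even when later frames finish sooner. A standalone sketch of that ordering trick, with a dummy workload standing in for the frame processors:

```
from collections import deque
from concurrent.futures import ThreadPoolExecutor
from typing import Deque

def process_frame(frame : int) -> int:
    return frame * frame # dummy workload

deque_capture_frames : Deque[int] = deque()
with ThreadPoolExecutor(max_workers = 4) as executor:
    futures = []
    for frame in range(8):
        futures.append(executor.submit(process_frame, frame))
        # harvest from the front only, preserving submission order
        while futures and futures[0].done():
            deque_capture_frames.append(futures.pop(0).result())
    for future in futures:
        deque_capture_frames.append(future.result())

assert list(deque_capture_frames) == [ frame * frame for frame in range(8) ]
```
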
def stop() -> Update:
return gradio.update(value = None)
def stop() -> gradio.Image:
return gradio.Image(value = None)
def capture_webcam(resolution : str, fps : float) -> cv2.VideoCapture:
@@ -109,8 +112,7 @@ def capture_webcam(resolution : str, fps : float) -> cv2.VideoCapture:
def process_stream_frame(source_face : Face, temp_frame : Frame) -> Frame:
for frame_processor in facefusion.globals.frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
if frame_processor_module.pre_process('stream'):
temp_frame = frame_processor_module.process_frame(
source_face,

View File

@@ -3,8 +3,7 @@ import gradio
from facefusion import wording
from facefusion.uis import choices
from facefusion.uis import core as ui
from facefusion.uis.typing import Update
from facefusion.uis.core import register_ui_component
WEBCAM_MODE_RADIO : Optional[gradio.Radio] = None
WEBCAM_RESOLUTION_DROPDOWN : Optional[gradio.Dropdown] = None
@@ -18,25 +17,21 @@ def render() -> None:
WEBCAM_MODE_RADIO = gradio.Radio(
label = wording.get('webcam_mode_radio_label'),
choices = choices.webcam_mode,
choices = choices.webcam_modes,
value = 'inline'
)
WEBCAM_RESOLUTION_DROPDOWN = gradio.Dropdown(
label = wording.get('webcam_resolution_dropdown'),
choices = choices.webcam_resolution,
value = choices.webcam_resolution[0]
choices = choices.webcam_resolutions,
value = choices.webcam_resolutions[0]
)
WEBCAM_FPS_SLIDER = gradio.Slider(
label = wording.get('webcam_fps_slider'),
minimum = 1,
maximum = 60,
value = 25,
step = 1,
value = 25
minimum = 1,
maximum = 60
)
ui.register_component('webcam_mode_radio', WEBCAM_MODE_RADIO)
ui.register_component('webcam_resolution_dropdown', WEBCAM_RESOLUTION_DROPDOWN)
ui.register_component('webcam_fps_slider', WEBCAM_FPS_SLIDER)
def update() -> Update:
return gradio.update(value = None)
register_ui_component('webcam_mode_radio', WEBCAM_MODE_RADIO)
register_ui_component('webcam_resolution_dropdown', WEBCAM_RESOLUTION_DROPDOWN)
register_ui_component('webcam_fps_slider', WEBCAM_FPS_SLIDER)

View File

@@ -1,5 +1,5 @@
from types import ModuleType
from typing import Dict, Optional, Any, List
from types import ModuleType
import importlib
import sys
import gradio
@@ -7,8 +7,9 @@ import gradio
import facefusion.globals
from facefusion import metadata, wording
from facefusion.uis.typing import Component, ComponentName
from facefusion.utilities import resolve_relative_path
COMPONENTS: Dict[ComponentName, Component] = {}
UI_COMPONENTS: Dict[ComponentName, Component] = {}
UI_LAYOUT_MODULES : List[ModuleType] = []
UI_LAYOUT_METHODS =\
[
@@ -43,8 +44,18 @@ def get_ui_layouts_modules(ui_layouts : List[str]) -> List[ModuleType]:
return UI_LAYOUT_MODULES
def get_ui_component(name: ComponentName) -> Optional[Component]:
if name in UI_COMPONENTS:
return UI_COMPONENTS[name]
return None
def register_ui_component(name: ComponentName, component: Component) -> None:
UI_COMPONENTS[name] = component
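
register_ui_component and get_ui_component replace the old ui.register_component/ui.get_component pair: components register themselves at render time and other modules look them up lazily inside listen(). A condensed sketch of the registry round trip:

```
from typing import Dict, Optional

UI_COMPONENTS : Dict[str, object] = {}

def register_ui_component(name : str, component : object) -> None:
    UI_COMPONENTS[name] = component

def get_ui_component(name : str) -> Optional[object]:
    return UI_COMPONENTS.get(name)

register_ui_component('preview_frame_slider', object())
assert get_ui_component('preview_frame_slider') is not None
assert get_ui_component('missing_component') is None
```
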
def launch() -> None:
with gradio.Blocks(theme = get_theme(), title = metadata.get('name') + ' ' + metadata.get('version')) as ui:
with gradio.Blocks(theme = get_theme(), css = get_css(), title = metadata.get('name') + ' ' + metadata.get('version')) as ui:
for ui_layout in facefusion.globals.ui_layouts:
ui_layout_module = load_ui_layout_module(ui_layout)
if ui_layout_module.pre_render():
@@ -57,22 +68,63 @@ def launch() -> None:
def get_theme() -> gradio.Theme:
return gradio.themes.Soft(
return gradio.themes.Base(
primary_hue = gradio.themes.colors.red,
secondary_hue = gradio.themes.colors.gray,
font = gradio.themes.GoogleFont('Inter')
secondary_hue = gradio.themes.colors.neutral,
font = gradio.themes.GoogleFont('Open Sans')
).set(
background_fill_primary = '*neutral_50',
block_label_text_size = '*text_sm',
block_title_text_size = '*text_sm'
background_fill_primary = '*neutral_100',
block_background_fill = 'white',
block_border_width = '0',
block_label_background_fill = '*primary_100',
block_label_background_fill_dark = '*primary_600',
block_label_border_width = 'none',
block_label_margin = '0.5rem',
block_label_radius = '*radius_md',
block_label_text_color = '*primary_500',
block_label_text_color_dark = 'white',
block_label_text_weight = '600',
block_title_background_fill = '*primary_100',
block_title_background_fill_dark = '*primary_600',
block_title_padding = '*block_label_padding',
block_title_radius = '*block_label_radius',
block_title_text_color = '*primary_500',
block_title_text_size = '*text_sm',
block_title_text_weight = '600',
block_padding = '0.5rem',
border_color_primary = 'transparent',
border_color_primary_dark = 'transparent',
button_large_padding = '2rem 0.5rem',
button_large_text_weight = 'normal',
button_primary_background_fill = '*primary_500',
button_primary_text_color = 'white',
button_secondary_background_fill = 'white',
button_secondary_border_color = 'transparent',
button_secondary_border_color_dark = 'transparent',
button_secondary_border_color_hover = 'transparent',
button_secondary_border_color_hover_dark = 'transparent',
button_secondary_text_color = '*neutral_800',
button_small_padding = '0.75rem',
checkbox_background_color = '*neutral_200',
checkbox_background_color_selected = '*primary_600',
checkbox_background_color_selected_dark = '*primary_700',
checkbox_border_color_focus = '*primary_500',
checkbox_border_color_focus_dark = '*primary_600',
checkbox_border_color_selected = '*primary_600',
checkbox_border_color_selected_dark = '*primary_700',
checkbox_label_background_fill = '*neutral_50',
checkbox_label_background_fill_hover = '*neutral_50',
checkbox_label_background_fill_selected = '*primary_500',
checkbox_label_background_fill_selected_dark = '*primary_600',
checkbox_label_text_color_selected = 'white',
input_background_fill = '*neutral_50',
shadow_drop = 'none',
slider_color = '*primary_500',
slider_color_dark = '*primary_600'
)
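The `'*token'` strings are references into the theme's variable palette rather than literal CSS values, so the red `primary_hue` configured at the top propagates through every override. A stripped-down sketch of the same pattern:

```python
import gradio

# '*primary_500' resolves to a shade of the configured red hue,
# so swapping the palette restyles every override at once
THEME = gradio.themes.Base(
	primary_hue = gradio.themes.colors.red
).set(
	button_primary_background_fill = '*primary_500',
	button_primary_text_color = 'white'
)

with gradio.Blocks(theme = THEME):
	gradio.Button('START', variant = 'primary')
```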
def get_component(name: ComponentName) -> Optional[Component]:
if name in COMPONENTS:
return COMPONENTS[name]
return None
def register_component(name: ComponentName, component: Component) -> None:
COMPONENTS[name] = component
def get_css() -> str:
fixes_css_path = resolve_relative_path('uis/assets/fixes.css')
overrides_css_path = resolve_relative_path('uis/assets/overrides.css')
return open(fixes_css_path, 'r').read() + open(overrides_css_path, 'r').read()
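`get_css()` concatenates the two stylesheets but leaves both file handles to the garbage collector. A hedged variant with explicit context managers (same paths as above):

```python
from facefusion.utilities import resolve_relative_path


def get_css() -> str:
	css = ''
	for css_path in [ resolve_relative_path('uis/assets/fixes.css'), resolve_relative_path('uis/assets/overrides.css') ]:
		# the with-block closes each stylesheet deterministically
		with open(css_path, 'r') as css_file:
			css += css_file.read()
	return css
```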

View File

@ -1,8 +1,8 @@
import gradio
import facefusion.globals
from facefusion.uis.components import about, processors, execution, execution_thread_count, execution_queue_count, limit_resources, benchmark_settings, benchmark
from facefusion.utilities import conditional_download
from facefusion.uis.components import about, frame_processors, frame_processors_options, execution, execution_thread_count, execution_queue_count, limit_resources, benchmark_options, benchmark
def pre_check() -> bool:
@ -30,10 +30,11 @@ def render() -> gradio.Blocks:
with gradio.Blocks() as layout:
with gradio.Row():
with gradio.Column(scale = 2):
with gradio.Box():
with gradio.Blocks():
about.render()
with gradio.Blocks():
processors.render()
frame_processors.render()
frame_processors_options.render()
with gradio.Blocks():
execution.render()
execution_thread_count.render()
@ -41,7 +42,7 @@ def render() -> gradio.Blocks:
with gradio.Blocks():
limit_resources.render()
with gradio.Blocks():
benchmark_settings.render()
benchmark_options.render()
with gradio.Column(scale = 5):
with gradio.Blocks():
benchmark.render()
@ -49,15 +50,14 @@ def render() -> gradio.Blocks:
def listen() -> None:
processors.listen()
frame_processors.listen()
frame_processors_options.listen()
execution.listen()
execution_thread_count.listen()
execution_queue_count.listen()
limit_resources.listen()
benchmark_settings.listen()
benchmark.listen()
def run(ui : gradio.Blocks) -> None:
ui.queue(concurrency_count = 2, api_open = False)
ui.launch(show_api = False)
ui.queue(concurrency_count = 2, api_open = False).launch(show_api = False)
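The chained form works because `Blocks.queue()` in Gradio 3.x returns the Blocks instance itself, so the former two statements collapse into one without behavioral change:

```python
import gradio

with gradio.Blocks() as ui:
	gradio.Markdown('layout placeholder')

# equivalent to calling queue() and launch() as separate statements
ui.queue(concurrency_count = 2, api_open = False).launch(show_api = False)
```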

View File

@ -1,6 +1,6 @@
import gradio
from facefusion.uis.components import about, processors, execution, execution_thread_count, execution_queue_count, limit_resources, temp_frame, output_settings, settings, source, target, preview, trim_frame, face_analyser, face_selector, output
from facefusion.uis.components import about, frame_processors, frame_processors_options, execution, execution_thread_count, execution_queue_count, limit_resources, temp_frame, output_options, common_options, source, target, preview, trim_frame, face_analyser, face_selector, output
def pre_check() -> bool:
@ -15,10 +15,11 @@ def render() -> gradio.Blocks:
with gradio.Blocks() as layout:
with gradio.Row():
with gradio.Column(scale = 2):
with gradio.Box():
with gradio.Blocks():
about.render()
with gradio.Blocks():
processors.render()
frame_processors.render()
frame_processors_options.render()
with gradio.Blocks():
execution.render()
execution_thread_count.render()
@ -28,9 +29,7 @@ def render() -> gradio.Blocks:
with gradio.Blocks():
temp_frame.render()
with gradio.Blocks():
output_settings.render()
with gradio.Blocks():
settings.render()
output_options.render()
with gradio.Column(scale = 2):
with gradio.Blocks():
source.render()
@ -47,18 +46,21 @@ def render() -> gradio.Blocks:
face_selector.render()
with gradio.Row():
face_analyser.render()
with gradio.Blocks():
common_options.render()
return layout
def listen() -> None:
processors.listen()
frame_processors.listen()
frame_processors_options.listen()
execution.listen()
execution_thread_count.listen()
execution_queue_count.listen()
limit_resources.listen()
temp_frame.listen()
output_settings.listen()
settings.listen()
output_options.listen()
common_options.listen()
source.listen()
target.listen()
preview.listen()
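Each layout drives its components through the same two-phase convention: `render()` instantiates the widgets inside the Blocks context, then `listen()` attaches events once every component exists. A hypothetical minimal component module following that shape (all names illustrative):

```python
from typing import Optional

import gradio

EXAMPLE_CHECKBOX : Optional[gradio.Checkbox] = None


def render() -> None:
	# phase one: create the widget inside the layout's Blocks context
	global EXAMPLE_CHECKBOX
	EXAMPLE_CHECKBOX = gradio.Checkbox(label = 'EXAMPLE', value = True)


def listen() -> None:
	# phase two: wire events after all components are rendered
	if EXAMPLE_CHECKBOX:
		EXAMPLE_CHECKBOX.change(lambda value : print(value), inputs = EXAMPLE_CHECKBOX)


with gradio.Blocks():
	render()
	listen()
```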

View File

@ -1,6 +1,6 @@
import gradio
from facefusion.uis.components import about, processors, execution, execution_thread_count, webcam_settings, source, webcam
from facefusion.uis.components import about, frame_processors, frame_processors_options, execution, execution_thread_count, webcam_options, source, webcam
def pre_check() -> bool:
@ -15,15 +15,16 @@ def render() -> gradio.Blocks:
with gradio.Blocks() as layout:
with gradio.Row():
with gradio.Column(scale = 2):
with gradio.Box():
with gradio.Blocks():
about.render()
with gradio.Blocks():
processors.render()
frame_processors.render()
frame_processors_options.render()
with gradio.Blocks():
execution.render()
execution_thread_count.render()
with gradio.Blocks():
webcam_settings.render()
webcam_options.render()
with gradio.Blocks():
source.render()
with gradio.Column(scale = 5):
@ -33,7 +34,8 @@ def render() -> gradio.Blocks:
def listen() -> None:
processors.listen()
frame_processors.listen()
frame_processors_options.listen()
execution.listen()
execution_thread_count.listen()
source.listen()
@ -41,5 +43,4 @@ def listen() -> None:
def run(ui : gradio.Blocks) -> None:
ui.queue(concurrency_count = 2, api_open = False)
ui.launch(show_api = False)
ui.queue(concurrency_count = 2, api_open = False).launch(show_api = False)

View File

@ -1,4 +1,4 @@
from typing import Literal, Dict, Any
from typing import Literal
import gradio
Component = gradio.File or gradio.Image or gradio.Video or gradio.Slider
@ -15,12 +15,18 @@ ComponentName = Literal\
'face_analyser_age_dropdown',
'face_analyser_gender_dropdown',
'frame_processors_checkbox_group',
'face_swapper_model_dropdown',
'face_enhancer_model_dropdown',
'face_enhancer_blend_slider',
'frame_enhancer_model_dropdown',
'frame_enhancer_blend_slider',
'output_path_textbox',
'benchmark_runs_checkbox_group',
'benchmark_cycles_slider',
'player_url_textbox_label',
'webcam_mode_radio',
'webcam_resolution_dropdown',
'webcam_fps_slider'
]
WebcamMode = Literal[ 'inline', 'stream_udp', 'stream_v4l2' ]
WebcamMode = Literal[ 'inline', 'udp', 'v4l2' ]
StreamMode = Literal[ 'udp', 'v4l2' ]
Update = Dict[Any, Any]
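After the rename, `WebcamMode` is a strict superset of `StreamMode`, which lets a static type checker narrow one to the other once the `'inline'` case is ruled out. A small sketch:

```python
from typing import Literal

WebcamMode = Literal['inline', 'udp', 'v4l2']
StreamMode = Literal['udp', 'v4l2']


def open_stream(stream_mode : StreamMode) -> None:
	print('streaming via ' + stream_mode)


def start(webcam_mode : WebcamMode) -> None:
	if webcam_mode == 'inline':
		print('rendering inline')
	else:
		# mypy narrows the remaining literals to a valid StreamMode
		open_stream(webcam_mode)
```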

View File

@ -70,13 +70,13 @@ def merge_video(target_path : str, fps : float) -> bool:
temp_frames_pattern = get_temp_frames_pattern(target_path, '%04d')
commands = [ '-hwaccel', 'auto', '-r', str(fps), '-i', temp_frames_pattern, '-c:v', facefusion.globals.output_video_encoder ]
if facefusion.globals.output_video_encoder in [ 'libx264', 'libx265' ]:
output_video_compression = round(51 - (facefusion.globals.output_video_quality * 0.5))
output_video_compression = round(51 - (facefusion.globals.output_video_quality * 0.51))
commands.extend([ '-crf', str(output_video_compression) ])
if facefusion.globals.output_video_encoder in [ 'libvpx-vp9' ]:
output_video_compression = round(63 - (facefusion.globals.output_video_quality * 0.5))
output_video_compression = round(63 - (facefusion.globals.output_video_quality * 0.63))
commands.extend([ '-crf', str(output_video_compression) ])
if facefusion.globals.output_video_encoder in [ 'h264_nvenc', 'hevc_nvenc' ]:
output_video_compression = round(51 - (facefusion.globals.output_video_quality * 0.5))
output_video_compression = round(51 - (facefusion.globals.output_video_quality * 0.51))
commands.extend([ '-cq', str(output_video_compression) ])
commands.extend([ '-pix_fmt', 'yuv420p', '-colorspace', 'bt709', '-y', temp_output_video_path ])
return run_ffmpeg(commands)
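The new multipliers make the quality-to-CRF translation span each encoder's full range: with the old `* 0.5`, an output quality of 100 still produced CRF 1 for libx264 and CRF 13 for libvpx-vp9; with `* 0.51` and `* 0.63` it now reaches CRF 0 at the top end. A quick check of the equivalent general form:

```python
def output_video_compression(output_video_quality : int, crf_maximum : int) -> int:
	# quality 0 maps to the encoder's worst CRF, quality 100 to CRF 0
	return round(crf_maximum - (output_video_quality * crf_maximum / 100))


assert output_video_compression(100, 51) == 0   # libx264 / libx265 / nvenc
assert output_video_compression(100, 63) == 0   # libvpx-vp9, previously 13
assert output_video_compression(0, 63) == 63
```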
@ -187,7 +187,7 @@ def conditional_download(download_directory_path : str, urls : List[str]) -> None:
initial = 0
if initial < total:
with tqdm(total = total, initial = initial, desc = wording.get('downloading'), unit = 'B', unit_scale = True, unit_divisor = 1024) as progress:
subprocess.Popen([ 'curl', '--create-dirs', '--silent', '--location', '--continue-at', '-', '--output', download_file_path, url ])
subprocess.Popen([ 'curl', '--create-dirs', '--silent', '--insecure', '--location', '--continue-at', '-', '--output', download_file_path, url ])
current = initial
while current < total:
if is_file(download_file_path):
@ -196,12 +196,12 @@ def conditional_download(download_directory_path : str, urls : List[str]) -> None:
@lru_cache(maxsize = None)
def get_download_size(url : str) -> Optional[int]:
def get_download_size(url : str) -> int:
try:
response = urllib.request.urlopen(url) # type: ignore[attr-defined]
return int(response.getheader('Content-Length'))
except (OSError, ValueError):
return None
return 0
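`get_download_size()` makes one request per URL (memoized by `lru_cache`), reads the `Content-Length` header, and now falls back to `0` so callers can compare sizes without `None` checks; the download loop above polls the growing file against that total. A reduced sketch, including a plausible body for `is_download_done` (the diff truncates before it) and a `TypeError` guard for responses without a `Content-Length` header, which `int(None)` would otherwise raise:

```python
import os
import urllib.request
from functools import lru_cache


@lru_cache(maxsize = None)
def get_download_size(url : str) -> int:
	try:
		response = urllib.request.urlopen(url, timeout = 10)  # timeout is an addition
		return int(response.getheader('Content-Length'))
	except (OSError, TypeError, ValueError):
		return 0


def is_download_done(url : str, file_path : str) -> bool:
	if os.path.isfile(file_path):
		return get_download_size(url) == os.path.getsize(file_path)
	return False
```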
def is_download_done(url : str, file_path : str) -> bool:

View File

@ -40,12 +40,13 @@ def normalize_frame_color(frame : Frame) -> Frame:
return cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
def resize_frame_dimension(frame : Frame, max_height : int) -> Frame:
def resize_frame_dimension(frame : Frame, max_width : int, max_height : int) -> Frame:
height, width = frame.shape[:2]
if height > max_height:
scale = max_height / height
max_width = int(width * scale)
frame = cv2.resize(frame, (max_width, max_height))
if height > max_height or width > max_width:
scale = min(max_height / height, max_width / width)
new_width = int(width * scale)
new_height = int(height * scale)
return cv2.resize(frame, (new_width, new_height))
return frame
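The reworked helper caps both axes and keeps the aspect ratio by applying the smaller of the two scale factors. For a 1280x720 frame limited to 640x480, the scale is min(480 / 720, 640 / 1280) = 0.5, yielding 640x360. A self-contained check (the `Frame` alias is a stand-in for the project's own type):

```python
import cv2
import numpy
from numpy.typing import NDArray

Frame = NDArray[numpy.uint8]


def resize_frame_dimension(frame : Frame, max_width : int, max_height : int) -> Frame:
	height, width = frame.shape[:2]
	if height > max_height or width > max_width:
		scale = min(max_height / height, max_width / width)
		new_width = int(width * scale)
		new_height = int(height * scale)
		return cv2.resize(frame, (new_width, new_height))
	return frame


frame = numpy.zeros((720, 1280, 3), dtype = numpy.uint8)
assert resize_frame_dimension(frame, 640, 480).shape[:2] == (360, 640)
```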

View File

@ -2,11 +2,13 @@ WORDING =\
{
'python_not_supported': 'Python version is not supported, upgrade to {version} or higher',
'ffmpeg_not_installed': 'FFMpeg is not installed',
'onnxruntime_help': 'select the onnxruntime to be installed',
'install_dependency_help': 'select the variant of {dependency} to install',
'source_help': 'select a source image',
'target_help': 'select a target image or video',
'output_help': 'specify the output file or directory',
'frame_processors_help': 'choose from the available frame processors (choices: {choices}, ...)',
'frame_processor_model_help': 'choose the model for the frame processor',
'frame_processor_blend_help': 'specify the blend factor for the frame processor',
'ui_layouts_help': 'choose from the available ui layouts (choices: {choices}, ...)',
'keep_fps_help': 'preserve the frames per second (fps) of the target',
'keep_temp_help': 'retain temporary frames after processing',
@ -58,6 +60,7 @@ WORDING =\
'frame_processor_not_implemented': 'Frame processor {frame_processor} not implemented correctly',
'ui_layout_not_loaded': 'UI layout {ui_layout} could not be loaded',
'ui_layout_not_implemented': 'UI layout {ui_layout} not implemented correctly',
'donate_button_label': 'DONATE',
'start_button_label': 'START',
'stop_button_label': 'STOP',
'clear_button_label': 'CLEAR',
@ -82,7 +85,12 @@ WORDING =\
'preview_image_label': 'PREVIEW',
'preview_frame_slider_label': 'PREVIEW FRAME',
'frame_processors_checkbox_group_label': 'FRAME PROCESSORS',
'settings_checkbox_group_label': 'SETTINGS',
'face_swapper_model_dropdown_label': 'FACE SWAPPER MODEL',
'face_enhancer_model_dropdown_label': 'FACE ENHANCER MODEL',
'face_enhancer_blend_slider_label': 'FACE ENHANCER BLEND',
'frame_enhancer_model_dropdown_label': 'FRAME ENHANCER MODEL',
'frame_enhancer_blend_slider_label': 'FRAME ENHANCER BLEND',
'common_options_checkbox_group_label': 'OPTIONS',
'temp_frame_format_dropdown_label': 'TEMP FRAME FORMAT',
'temp_frame_quality_slider_label': 'TEMP FRAME QUALITY',
'trim_frame_start_slider_label': 'TRIM FRAME START',

View File

@ -3,4 +3,4 @@
from facefusion import installer
if __name__ == '__main__':
installer.run()
installer.cli()

View File

@ -1,14 +1,15 @@
gfpgan==1.3.8
gradio==3.44.3
basicsr==1.4.2
gradio==3.47.1
insightface==0.7.3
numpy==1.24.3
onnx==1.14.1
onnxruntime==1.15.1
opencv-python==4.8.0.76
onnxruntime==1.16.0
opencv-python==4.8.1.78
opennsfw2==0.10.2
pillow==10.0.1
protobuf==4.24.2
psutil==5.9.5
realesrgan==0.3.0
tensorflow==2.13.0
torch==2.1.0
tqdm==4.66.1

run.py
View File

@ -3,4 +3,4 @@
from facefusion import core
if __name__ == '__main__':
core.run()
core.cli()

View File

@ -143,7 +143,7 @@ def test_is_video() -> None:
def test_get_download_size() -> None:
assert get_download_size('https://github.com/facefusion/facefusion-assets/releases/download/examples/target-240p.mp4') == 191675
assert get_download_size('https://github.com/facefusion/facefusion-assets/releases/download/examples/target-360p.mp4') == 370732
assert get_download_size('invalid') is None
assert get_download_size('invalid') == 0
def test_is_download_done() -> None: