
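# Core entry point for FaceFusion: parses the command line, seeds the state
# manager and routes each command to the job manager, the job runner, the UI
# or the headless processing pipeline implemented below.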
import shutil
import signal
import sys
from time import time

import numpy

from facefusion import content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, logger, process_manager, state_manager, voice_extractor, wording
from facefusion.args import apply_args, collect_job_args, reduce_step_args
from facefusion.common_helper import get_first
from facefusion.content_analyser import analyse_image, analyse_video
from facefusion.download import conditional_download_hashes, conditional_download_sources
from facefusion.exit_helper import conditional_exit, graceful_exit, hard_exit
from facefusion.face_analyser import get_average_face, get_many_faces, get_one_face
from facefusion.face_selector import sort_and_filter_faces
from facefusion.face_store import append_reference_face, clear_reference_faces, get_reference_faces
from facefusion.ffmpeg import copy_image, extract_frames, finalize_image, merge_video, replace_audio, restore_audio
from facefusion.filesystem import filter_audio_paths, is_image, is_video, list_directory, resolve_relative_path
from facefusion.jobs import job_helper, job_manager, job_runner
from facefusion.jobs.job_list import compose_job_list
from facefusion.memory import limit_system_memory
from facefusion.processors.core import get_processors_modules
from facefusion.program import create_program
from facefusion.program_helper import validate_args
from facefusion.statistics import conditional_log_statistics
from facefusion.temp_helper import clear_temp_directory, create_temp_directory, get_temp_file_path, get_temp_frame_paths, move_temp_file
from facefusion.typing import Args, ErrorCode
from facefusion.vision import get_video_frame, pack_resolution, read_image, read_static_images, restrict_image_resolution, restrict_video_fps, restrict_video_resolution, unpack_resolution


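# cli() registers a SIGINT handler for a graceful exit, validates and parses
# the program arguments into the state manager and hands control to route().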
def cli() -> None:
	signal.signal(signal.SIGINT, lambda signal_number, frame: graceful_exit(0))
	program = create_program()

	if validate_args(program):
		args = vars(program.parse_args())
		apply_args(args, state_manager.init_item)

		if state_manager.get_item('command'):
			logger.init(state_manager.get_item('log_level'))
			route(args)
		else:
			program.print_help()


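# route() dispatches the parsed command: it applies the optional system memory
# limit, then handles 'force-download', the job management commands, 'run'
# (UI), 'headless-run' and the job runner commands ('job-run', 'job-run-all',
# 'job-retry', 'job-retry-all').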
def route(args : Args) -> None:
	system_memory_limit = state_manager.get_item('system_memory_limit')
	if system_memory_limit and system_memory_limit > 0:
		limit_system_memory(system_memory_limit)
	if state_manager.get_item('command') == 'force-download':
		error_code = force_download()
		return conditional_exit(error_code)
	if state_manager.get_item('command') in [ 'job-create', 'job-submit', 'job-submit-all', 'job-delete', 'job-delete-all', 'job-add-step', 'job-remix-step', 'job-insert-step', 'job-remove-step', 'job-list' ]:
		if not job_manager.init_jobs(state_manager.get_item('jobs_path')):
			hard_exit(1)
		error_code = route_job_manager(args)
		hard_exit(error_code)
	if not pre_check():
		return conditional_exit(2)
	if state_manager.get_item('command') == 'run':
		import facefusion.uis.core as ui

		if not common_pre_check() or not processors_pre_check():
			return conditional_exit(2)
		for ui_layout in ui.get_ui_layouts_modules(state_manager.get_item('ui_layouts')):
			if not ui_layout.pre_check():
				return conditional_exit(2)
		ui.launch()
	if state_manager.get_item('command') == 'headless-run':
		if not job_manager.init_jobs(state_manager.get_item('jobs_path')):
			hard_exit(1)
		error_code = process_headless(args)
		hard_exit(error_code)
	if state_manager.get_item('command') in [ 'job-run', 'job-run-all', 'job-retry', 'job-retry-all' ]:
		if not job_manager.init_jobs(state_manager.get_item('jobs_path')):
			hard_exit(1)
		error_code = route_job_runner()
		hard_exit(error_code)


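# pre_check() verifies the runtime environment: Python 3.9 or newer plus the
# curl and ffmpeg binaries on the PATH.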
def pre_check() -> bool:
	if sys.version_info < (3, 9):
		logger.error(wording.get('python_not_supported').format(version = '3.9'), __name__)
		return False
	if not shutil.which('curl'):
		logger.error(wording.get('curl_not_installed'), __name__)
		return False
	if not shutil.which('ffmpeg'):
		logger.error(wording.get('ffmpeg_not_installed'), __name__)
		return False
	return True


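# common_pre_check() runs pre_check() on every core inference module, while
# processors_pre_check() does the same for the configured processors.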
def common_pre_check() -> bool:
	modules =\
	[
		content_analyser,
		face_classifier,
		face_detector,
		face_landmarker,
		face_masker,
		face_recognizer,
		voice_extractor
	]

	return all(module.pre_check() for module in modules)


def processors_pre_check() -> bool:
	for processor_module in get_processors_modules(state_manager.get_item('processors')):
		if not processor_module.pre_check():
			return False
	return True


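# conditional_process() runs the pre_process hook of every configured
# processor, seeds the reference faces if needed and then processes the
# target as image or video.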
def conditional_process() -> ErrorCode:
	start_time = time()
	for processor_module in get_processors_modules(state_manager.get_item('processors')):
		if not processor_module.pre_process('output'):
			return 2
	conditional_append_reference_faces()
	if is_image(state_manager.get_item('target_path')):
		return process_image(start_time)
	if is_video(state_manager.get_item('target_path')):
		return process_video(start_time)
	return 0


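# In reference mode, derive the reference faces once: pick the reference face
# from the target frame and let each processor contribute an abstract
# reference frame of its own.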
def conditional_append_reference_faces() -> None:
	if 'reference' in state_manager.get_item('face_selector_mode') and not get_reference_faces():
		source_frames = read_static_images(state_manager.get_item('source_paths'))
		source_faces = get_many_faces(source_frames)
		source_face = get_average_face(source_faces)
		if is_video(state_manager.get_item('target_path')):
			reference_frame = get_video_frame(state_manager.get_item('target_path'), state_manager.get_item('reference_frame_number'))
		else:
			reference_frame = read_image(state_manager.get_item('target_path'))
		reference_faces = sort_and_filter_faces(get_many_faces([ reference_frame ]))
		reference_face = get_one_face(reference_faces, state_manager.get_item('reference_face_position'))
		append_reference_face('origin', reference_face)

		if source_face and reference_face:
			for processor_module in get_processors_modules(state_manager.get_item('processors')):
				abstract_reference_frame = processor_module.get_reference_frame(source_face, reference_face, reference_frame)
				if numpy.any(abstract_reference_frame):
					abstract_reference_faces = sort_and_filter_faces(get_many_faces([ abstract_reference_frame ]))
					abstract_reference_face = get_one_face(abstract_reference_faces, state_manager.get_item('reference_face_position'))
					append_reference_face(processor_module.__name__, abstract_reference_face)


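# force_download() walks the MODEL_SET of every core module and processor and
# downloads the model hashes and sources into the assets directory.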
def force_download() -> ErrorCode:
	download_directory_path = resolve_relative_path('../.assets/models')
	available_processors = list_directory('facefusion/processors/modules')
	common_modules =\
	[
		content_analyser,
		face_classifier,
		face_detector,
		face_landmarker,
		face_recognizer,
		face_masker,
		voice_extractor
	]
	processor_modules = get_processors_modules(available_processors)

	for module in common_modules + processor_modules:
		if hasattr(module, 'MODEL_SET'):
			for model in module.MODEL_SET.values():
				model_hashes = model.get('hashes')
				model_sources = model.get('sources')

				if model_hashes and model_sources:
					if not conditional_download_hashes(download_directory_path, model_hashes) or not conditional_download_sources(download_directory_path, model_sources):
						return 1

	return 0


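# route_job_manager() maps the job management commands (create, submit,
# delete, list and the step commands) onto the job_manager API and reports
# the outcome via the logger.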
def route_job_manager(args : Args) -> ErrorCode:
	if state_manager.get_item('command') == 'job-create':
		if job_manager.create_job(state_manager.get_item('job_id')):
			logger.info(wording.get('job_created').format(job_id = state_manager.get_item('job_id')), __name__)
			return 0
		logger.error(wording.get('job_not_created').format(job_id = state_manager.get_item('job_id')), __name__)
		return 1
	if state_manager.get_item('command') == 'job-submit':
		if job_manager.submit_job(state_manager.get_item('job_id')):
			logger.info(wording.get('job_submitted').format(job_id = state_manager.get_item('job_id')), __name__)
			return 0
		logger.error(wording.get('job_not_submitted').format(job_id = state_manager.get_item('job_id')), __name__)
		return 1
	if state_manager.get_item('command') == 'job-submit-all':
		if job_manager.submit_jobs():
			logger.info(wording.get('job_all_submitted'), __name__)
			return 0
		logger.error(wording.get('job_all_not_submitted'), __name__)
		return 1
	if state_manager.get_item('command') == 'job-delete':
		if job_manager.delete_job(state_manager.get_item('job_id')):
			logger.info(wording.get('job_deleted').format(job_id = state_manager.get_item('job_id')), __name__)
			return 0
		logger.error(wording.get('job_not_deleted').format(job_id = state_manager.get_item('job_id')), __name__)
		return 1
	if state_manager.get_item('command') == 'job-delete-all':
		if job_manager.delete_jobs():
			logger.info(wording.get('job_all_deleted'), __name__)
			return 0
		logger.error(wording.get('job_all_not_deleted'), __name__)
		return 1
	if state_manager.get_item('command') == 'job-list':
		job_headers, job_contents = compose_job_list(state_manager.get_item('job_status'))

		if job_contents:
			logger.table(job_headers, job_contents)
			return 0
		return 1
	if state_manager.get_item('command') == 'job-add-step':
		step_args = reduce_step_args(args)

		if job_manager.add_step(state_manager.get_item('job_id'), step_args):
			logger.info(wording.get('job_step_added').format(job_id = state_manager.get_item('job_id')), __name__)
			return 0
		logger.error(wording.get('job_step_not_added').format(job_id = state_manager.get_item('job_id')), __name__)
		return 1
	if state_manager.get_item('command') == 'job-remix-step':
		step_args = reduce_step_args(args)

		if job_manager.remix_step(state_manager.get_item('job_id'), state_manager.get_item('step_index'), step_args):
			logger.info(wording.get('job_remix_step_added').format(job_id = state_manager.get_item('job_id'), step_index = state_manager.get_item('step_index')), __name__)
			return 0
		logger.error(wording.get('job_remix_step_not_added').format(job_id = state_manager.get_item('job_id'), step_index = state_manager.get_item('step_index')), __name__)
		return 1
	if state_manager.get_item('command') == 'job-insert-step':
		step_args = reduce_step_args(args)

		if job_manager.insert_step(state_manager.get_item('job_id'), state_manager.get_item('step_index'), step_args):
			logger.info(wording.get('job_step_inserted').format(job_id = state_manager.get_item('job_id'), step_index = state_manager.get_item('step_index')), __name__)
			return 0
		logger.error(wording.get('job_step_not_inserted').format(job_id = state_manager.get_item('job_id'), step_index = state_manager.get_item('step_index')), __name__)
		return 1
	if state_manager.get_item('command') == 'job-remove-step':
		if job_manager.remove_step(state_manager.get_item('job_id'), state_manager.get_item('step_index')):
			logger.info(wording.get('job_step_removed').format(job_id = state_manager.get_item('job_id'), step_index = state_manager.get_item('step_index')), __name__)
			return 0
		logger.error(wording.get('job_step_not_removed').format(job_id = state_manager.get_item('job_id'), step_index = state_manager.get_item('step_index')), __name__)
		return 1
	return 1


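# route_job_runner() executes drafted or failed jobs through job_runner,
# using process_step() as the per-step callback.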
def route_job_runner() -> ErrorCode:
	if state_manager.get_item('command') == 'job-run':
		logger.info(wording.get('running_job').format(job_id = state_manager.get_item('job_id')), __name__)
		if job_runner.run_job(state_manager.get_item('job_id'), process_step):
			logger.info(wording.get('processing_job_succeed').format(job_id = state_manager.get_item('job_id')), __name__)
			return 0
		logger.info(wording.get('processing_job_failed').format(job_id = state_manager.get_item('job_id')), __name__)
		return 1
	if state_manager.get_item('command') == 'job-run-all':
		logger.info(wording.get('running_jobs'), __name__)
		if job_runner.run_jobs(process_step):
			logger.info(wording.get('processing_jobs_succeed'), __name__)
			return 0
		logger.info(wording.get('processing_jobs_failed'), __name__)
		return 1
	if state_manager.get_item('command') == 'job-retry':
		logger.info(wording.get('retrying_job').format(job_id = state_manager.get_item('job_id')), __name__)
		if job_runner.retry_job(state_manager.get_item('job_id'), process_step):
			logger.info(wording.get('processing_job_succeed').format(job_id = state_manager.get_item('job_id')), __name__)
			return 0
		logger.info(wording.get('processing_job_failed').format(job_id = state_manager.get_item('job_id')), __name__)
		return 1
	if state_manager.get_item('command') == 'job-retry-all':
		logger.info(wording.get('retrying_jobs'), __name__)
		if job_runner.retry_jobs(process_step):
			logger.info(wording.get('processing_jobs_succeed'), __name__)
			return 0
		logger.info(wording.get('processing_jobs_failed'), __name__)
		return 1
	return 2


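# process_step() is the callback invoked by the job runner for every step:
# it resets the reference faces, merges the job args into the state manager
# and runs the processing pipeline for that step.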
def process_step(job_id : str, step_index : int, step_args : Args) -> bool:
	clear_reference_faces()
	step_total = job_manager.count_step_total(job_id)
	step_args.update(collect_job_args())
	apply_args(step_args, state_manager.set_item)

	logger.info(wording.get('processing_step').format(step_current = step_index + 1, step_total = step_total), __name__)
	if common_pre_check() and processors_pre_check():
		error_code = conditional_process()
		return error_code == 0
	return False


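# process_headless() wraps a single run into an ad-hoc job: create, add one
# step, submit and run it immediately.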
def process_headless(args : Args) -> ErrorCode:
	job_id = job_helper.suggest_job_id('headless')
	step_args = reduce_step_args(args)

	if job_manager.create_job(job_id) and job_manager.add_step(job_id, step_args) and job_manager.submit_job(job_id) and job_runner.run_job(job_id, process_step):
		return 0
	return 1


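# process_image() copies the target image into the temp directory, runs every
# processor on it, finalizes the output image and cleans up the temp files.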
def process_image(start_time : float) -> ErrorCode:
	if analyse_image(state_manager.get_item('target_path')):
		return 3
	# clear temp
	logger.debug(wording.get('clearing_temp'), __name__)
	clear_temp_directory(state_manager.get_item('target_path'))
	# create temp
	logger.debug(wording.get('creating_temp'), __name__)
	create_temp_directory(state_manager.get_item('target_path'))
	# copy image
	process_manager.start()
	temp_image_resolution = pack_resolution(restrict_image_resolution(state_manager.get_item('target_path'), unpack_resolution(state_manager.get_item('output_image_resolution'))))
	logger.info(wording.get('copying_image').format(resolution = temp_image_resolution), __name__)
	if copy_image(state_manager.get_item('target_path'), temp_image_resolution):
		logger.debug(wording.get('copying_image_succeed'), __name__)
	else:
		logger.error(wording.get('copying_image_failed'), __name__)
		process_manager.end()
		return 1
	# process image
	temp_file_path = get_temp_file_path(state_manager.get_item('target_path'))
	for processor_module in get_processors_modules(state_manager.get_item('processors')):
		logger.info(wording.get('processing'), processor_module.__name__)
		processor_module.process_image(state_manager.get_item('source_paths'), temp_file_path, temp_file_path)
		processor_module.post_process()
	if is_process_stopping():
		process_manager.end()
		return 4
	# finalize image
	logger.info(wording.get('finalizing_image').format(resolution = state_manager.get_item('output_image_resolution')), __name__)
	if finalize_image(state_manager.get_item('target_path'), state_manager.get_item('output_path'), state_manager.get_item('output_image_resolution')):
		logger.debug(wording.get('finalizing_image_succeed'), __name__)
	else:
		logger.warn(wording.get('finalizing_image_skipped'), __name__)
	# clear temp
	logger.debug(wording.get('clearing_temp'), __name__)
	clear_temp_directory(state_manager.get_item('target_path'))
	# validate image
	if is_image(state_manager.get_item('output_path')):
		seconds = '{:.2f}'.format(time() - start_time)
		logger.info(wording.get('processing_image_succeed').format(seconds = seconds), __name__)
		conditional_log_statistics()
	else:
		logger.error(wording.get('processing_image_failed'), __name__)
		process_manager.end()
		return 1
	process_manager.end()
	return 0


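# process_video() extracts the frames, runs every processor over them, merges
# the frames back into a video, restores or replaces the audio and cleans up
# the temp files.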
def process_video(start_time : float) -> ErrorCode:
	if analyse_video(state_manager.get_item('target_path'), state_manager.get_item('trim_frame_start'), state_manager.get_item('trim_frame_end')):
		return 3
	# clear temp
	logger.debug(wording.get('clearing_temp'), __name__)
	clear_temp_directory(state_manager.get_item('target_path'))
	# create temp
	logger.debug(wording.get('creating_temp'), __name__)
	create_temp_directory(state_manager.get_item('target_path'))
	# extract frames
	process_manager.start()
	temp_video_resolution = pack_resolution(restrict_video_resolution(state_manager.get_item('target_path'), unpack_resolution(state_manager.get_item('output_video_resolution'))))
	temp_video_fps = restrict_video_fps(state_manager.get_item('target_path'), state_manager.get_item('output_video_fps'))
	logger.info(wording.get('extracting_frames').format(resolution = temp_video_resolution, fps = temp_video_fps), __name__)
	if extract_frames(state_manager.get_item('target_path'), temp_video_resolution, temp_video_fps):
		logger.debug(wording.get('extracting_frames_succeed'), __name__)
	else:
		if is_process_stopping():
			process_manager.end()
			return 4
		logger.error(wording.get('extracting_frames_failed'), __name__)
		process_manager.end()
		return 1
	# process frames
	temp_frame_paths = get_temp_frame_paths(state_manager.get_item('target_path'))
	if temp_frame_paths:
		for processor_module in get_processors_modules(state_manager.get_item('processors')):
			logger.info(wording.get('processing'), processor_module.__name__)
			processor_module.process_video(state_manager.get_item('source_paths'), temp_frame_paths)
			processor_module.post_process()
		if is_process_stopping():
			return 4
	else:
		logger.error(wording.get('temp_frames_not_found'), __name__)
		process_manager.end()
		return 1
	# merge video
	logger.info(wording.get('merging_video').format(resolution = state_manager.get_item('output_video_resolution'), fps = state_manager.get_item('output_video_fps')), __name__)
	if merge_video(state_manager.get_item('target_path'), state_manager.get_item('output_video_resolution'), state_manager.get_item('output_video_fps')):
		logger.debug(wording.get('merging_video_succeed'), __name__)
	else:
		if is_process_stopping():
			process_manager.end()
			return 4
		logger.error(wording.get('merging_video_failed'), __name__)
		process_manager.end()
		return 1
	# handle audio
	if state_manager.get_item('skip_audio'):
		logger.info(wording.get('skipping_audio'), __name__)
		move_temp_file(state_manager.get_item('target_path'), state_manager.get_item('output_path'))
	else:
		if 'lip_syncer' in state_manager.get_item('processors'):
			source_audio_path = get_first(filter_audio_paths(state_manager.get_item('source_paths')))
			if source_audio_path and replace_audio(state_manager.get_item('target_path'), source_audio_path, state_manager.get_item('output_path')):
				logger.debug(wording.get('restoring_audio_succeed'), __name__)
			else:
				if is_process_stopping():
					process_manager.end()
					return 4
				logger.warn(wording.get('restoring_audio_skipped'), __name__)
				move_temp_file(state_manager.get_item('target_path'), state_manager.get_item('output_path'))
		else:
			if restore_audio(state_manager.get_item('target_path'), state_manager.get_item('output_path'), state_manager.get_item('output_video_fps')):
				logger.debug(wording.get('restoring_audio_succeed'), __name__)
			else:
				if is_process_stopping():
					process_manager.end()
					return 4
				logger.warn(wording.get('restoring_audio_skipped'), __name__)
				move_temp_file(state_manager.get_item('target_path'), state_manager.get_item('output_path'))
	# clear temp
	logger.debug(wording.get('clearing_temp'), __name__)
	clear_temp_directory(state_manager.get_item('target_path'))
	# validate video
	if is_video(state_manager.get_item('output_path')):
		seconds = '{:.2f}'.format((time() - start_time))
		logger.info(wording.get('processing_video_succeed').format(seconds = seconds), __name__)
		conditional_log_statistics()
	else:
		logger.error(wording.get('processing_video_failed'), __name__)
		process_manager.end()
		return 1
	process_manager.end()
	return 0


def is_process_stopping() -> bool:
	if process_manager.is_stopping():
		process_manager.end()
		logger.info(wording.get('processing_stopped'), __name__)
	return process_manager.is_pending()