
* feat/yoloface (#334)
* added yolov8 to face_detector (#323)
* added yolov8 to face_detector
* added yolov8 to face_detector
* Initial cleanup and renaming
* Update README
* refactored detect_with_yoloface (#329)
* refactored detect_with_yoloface
* apply review
* Change order again
* Restore working code
* modified code (#330)
* refactored detect_with_yoloface
* apply review
* use temp_frame in detect_with_yoloface
* reorder
* modified
* reorder models
* Tiny cleanup
---------
Co-authored-by: tamoharu <133945583+tamoharu@users.noreply.github.com>
* include audio file functions (#336)
* Add testing for audio handlers
* Change order
* Fix naming
* Use correct typing in choices
* Update help message for arguments, Notation-based wording approach (#347)
* Update help message for arguments, Notation-based wording approach
* Fix installer
* Audio functions (#345)
* Update ffmpeg.py
* Create audio.py
* Update ffmpeg.py
* Update audio.py
* Update audio.py
* Update typing.py
* Update ffmpeg.py
* Update audio.py
* Rename Frame to VisionFrame (#346)
* Minor tidy up
* Introduce audio testing
* Add more todo for testing
* Add more todo for testing
* Fix indent
* Enable venv on the fly
* Enable venv on the fly
* Revert venv on the fly
* Revert venv on the fly
* Force Gradio to shut up
* Force Gradio to shut up
* Clear temp before processing
* Reduce terminal output
* include audio file functions
* Enforce output resolution on merge video
* Minor cleanups
* Add age and gender to face debugger items (#353)
* Add age and gender to face debugger items
* Rename like suggested in the code review
* Fix the output framerate vs. time
* Lip Sync (#356)
* Cli implementation of wav2lip
* - create get_first_item()
- remove non-GAN wav2lip model
- implement video memory strategy
- implement get_reference_frame()
- implement process_image()
- rearrange crop_mask_list
- implement test_cli
* Simplify testing
* Rename to lip syncer
* Fix testing
* Fix testing
* Minor cleanup
* CUDA 12 installer (#362)
* Make cuda nightly (12) the default
* Better keep legacy cuda just in case
* Use CUDA and ROCM versions
* Remove macOS options from installer (CoreML included in default package)
* Add lip-syncer support to source component
* Add lip-syncer support to source component
* Fix the check in the source component
* Add target image check
* Introduce more helpers to suit the lip-syncer needs
* Downgrade onnxruntime due to buggy 1.17.0 release
* Revert "Downgrade onnxruntime due to buggy 1.17.0 release"
This reverts commit f4a7ae6824.
* More testing and add todos
* Fix the frame processor API to at least not throw errors
* Introduce dict based frame processor inputs (#364)
* Introduce dict based frame processor inputs
* Forgot to adjust webcam
* create path payloads (#365)
* create index-to-path payloads for process_frames
* rename to payload_paths
* This code is now poetry
* Fix the terminal output
* Make lip-syncer work in the preview
* Remove face debugger test for now
* Reorder reference_faces, Fix testing
* Use inswapper_128 on buggy onnxruntime 1.17.0
* Undo inswapper_128_fp16 due to broken onnxruntime 1.17.0
* Undo inswapper_128_fp16 due to broken onnxruntime 1.17.0
* Fix lip_syncer occluder & region mask issue
* Fix preview once in case there was no output video fps
* fix lip_syncer custom fps
* remove unused import
* Add 68 landmark functions (#367)
* Add 68 landmark model
* Add landmark to face object
* Re-arrange and modify typing
* Rename function
* Rearrange
* Rearrange
* ignore type
* ignore type
* change type
* ignore
* name
* Some cleanup
* Some cleanup
* Oops, I broke something
* Feat/face analyser refactoring (#369)
* Restructure face analyser and start TDD
* YoloFace and Yunet testing are passing
* Remove offset from yoloface detection
* Cleanup code
* Tiny fix
* Fix get_many_faces()
* Tiny fix (again)
* Use 320x320 fallback for retinaface
* Fix merging mashup
* Upload wav2lip model
* Upload 2dfan2 model and rename internal to face_predictor
* Downgrade onnxruntime for most cases
* Update for the face debugger to render landmark 68
* Try to make detect_face_landmark_68() and detect_gender_age() more uniform
* Enable retinaface testing for 320x320
* Make detect_face_landmark_68() and detect_gender_age() as uniform as … (#370)
* Make detect_face_landmark_68() and detect_gender_age() as uniform as possible
* Revert landmark scale and translation
* Make box-mask for lip-syncer adjustable
* Add create_bbox_from_landmark()
* Remove currently unused code
* Feat/uniface (#375)
* add uniface (#373)
* Finalize UniFace implementation
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* My approach on how to do it
* edit
* edit
* replace vertical blur with Gaussian blur
* remove region mask
* Rebase against next and restore method
* Minor improvements
* Minor improvements
* rename & add forehead padding
* Adjust and host uniface model
* Use 2dfan4 model
* Rename to face landmarker
* Feat/replace bbox with bounding box (#380)
* Add landmark 68 to 5 conversion
* Add landmark 68 to 5 conversion
* Keep 5, 5/68 and 68 landmarks
* Replace kps with landmark
* Replace bbox with bounding box
* Reshape face_landmark5_list differently
* Make yoloface the default
* Move convert_face_landmark_68_to_5 to face_helper
* Minor spacing issue
* Dynamic detector sizes according to model (#382)
* Dynamic detector sizes according to model
* Dynamic detector sizes according to model
* Undo falsely committed files
* Add lip syncer model to the UI
* fix halo (#383)
* Bump to 2.3.0
* Update README and wording
* Update README and wording
* Fix spacing
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix, Move mouth mask to face_masker.py
* Apply _vision suffix
* Apply _vision suffix
* increase forehead padding
---------
Co-authored-by: tamoharu <133945583+tamoharu@users.noreply.github.com>
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
from typing import Any, Dict, List
from cv2.typing import Size
from functools import lru_cache
import threading
import cv2
import numpy
import onnxruntime

import facefusion.globals
from facefusion.typing import FaceLandmark68, VisionFrame, Mask, Padding, FaceMaskRegion, ModelSet
from facefusion.execution_helper import apply_execution_provider_options
from facefusion.filesystem import resolve_relative_path
from facefusion.download import conditional_download

FACE_OCCLUDER = None
FACE_PARSER = None
THREAD_LOCK : threading.Lock = threading.Lock()
MODELS : ModelSet =\
{
	'face_occluder':
	{
		'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/face_occluder.onnx',
		'path': resolve_relative_path('../.assets/models/face_occluder.onnx')
	},
	'face_parser':
	{
		'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/face_parser.onnx',
		'path': resolve_relative_path('../.assets/models/face_parser.onnx')
	}
}
FACE_MASK_REGIONS : Dict[FaceMaskRegion, int] =\
{
	'skin': 1,
	'left-eyebrow': 2,
	'right-eyebrow': 3,
	'left-eye': 4,
	'right-eye': 5,
	'eye-glasses': 6,
	'nose': 10,
	'mouth': 11,
	'upper-lip': 12,
	'lower-lip': 13
}

def get_face_occluder() -> Any:
	global FACE_OCCLUDER

	with THREAD_LOCK:
		if FACE_OCCLUDER is None:
			model_path = MODELS.get('face_occluder').get('path')
			FACE_OCCLUDER = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
	return FACE_OCCLUDER


def get_face_parser() -> Any:
	global FACE_PARSER

	with THREAD_LOCK:
		if FACE_PARSER is None:
			model_path = MODELS.get('face_parser').get('path')
			FACE_PARSER = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_providers))
	return FACE_PARSER


def clear_face_occluder() -> None:
	global FACE_OCCLUDER

	FACE_OCCLUDER = None


def clear_face_parser() -> None:
	global FACE_PARSER

	FACE_PARSER = None

def pre_check() -> bool:
	if not facefusion.globals.skip_download:
		download_directory_path = resolve_relative_path('../.assets/models')
		model_urls =\
		[
			MODELS.get('face_occluder').get('url'),
			MODELS.get('face_parser').get('url')
		]
		conditional_download(download_directory_path, model_urls)
	return True

@lru_cache(maxsize = None)
def create_static_box_mask(crop_size : Size, face_mask_blur : float, face_mask_padding : Padding) -> Mask:
	blur_amount = int(crop_size[0] * 0.5 * face_mask_blur)
	blur_area = max(blur_amount // 2, 1)
	box_mask : Mask = numpy.ones(crop_size, numpy.float32)
	box_mask[:max(blur_area, int(crop_size[1] * face_mask_padding[0] / 100)), :] = 0
	box_mask[-max(blur_area, int(crop_size[1] * face_mask_padding[2] / 100)):, :] = 0
	box_mask[:, :max(blur_area, int(crop_size[0] * face_mask_padding[3] / 100))] = 0
	box_mask[:, -max(blur_area, int(crop_size[0] * face_mask_padding[1] / 100)):] = 0
	if blur_amount > 0:
		box_mask = cv2.GaussianBlur(box_mask, (0, 0), blur_amount * 0.25)
	return box_mask
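
# A minimal usage sketch (not part of the original module): face_mask_padding
# is percent-based in (top, right, bottom, left) order and the result is a
# float32 mask in [0, 1] shaped like crop_size; the 256x256 crop size is
# illustrative only.
#
# box_mask = create_static_box_mask((256, 256), 0.3, (0, 0, 0, 0))
# assert box_mask.shape == (256, 256)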

def create_occlusion_mask(crop_vision_frame : VisionFrame) -> Mask:
	face_occluder = get_face_occluder()
	prepare_vision_frame = cv2.resize(crop_vision_frame, face_occluder.get_inputs()[0].shape[1:3][::-1])
	prepare_vision_frame = numpy.expand_dims(prepare_vision_frame, axis = 0).astype(numpy.float32) / 255
	occlusion_mask : Mask = face_occluder.run(None,
	{
		face_occluder.get_inputs()[0].name: prepare_vision_frame
	})[0][0]
	occlusion_mask = occlusion_mask.clip(0, 1).astype(numpy.float32)
	occlusion_mask = cv2.resize(occlusion_mask, crop_vision_frame.shape[:2][::-1])
	occlusion_mask = (cv2.GaussianBlur(occlusion_mask.clip(0, 1), (0, 0), 5).clip(0.5, 1) - 0.5) * 2
	return occlusion_mask
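
# The occluder input size is read from the ONNX session at runtime, so the
# resize above adapts if the model changes. A hedged inspection sketch; the
# actual input shape is not asserted here:
#
# session = get_face_occluder()
# print(session.get_inputs()[0].name, session.get_inputs()[0].shape)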

def create_region_mask(crop_vision_frame : VisionFrame, face_mask_regions : List[FaceMaskRegion]) -> Mask:
	face_parser = get_face_parser()
	prepare_vision_frame = cv2.flip(cv2.resize(crop_vision_frame, (512, 512)), 1)
	prepare_vision_frame = numpy.expand_dims(prepare_vision_frame, axis = 0).astype(numpy.float32)[:, :, ::-1] / 127.5 - 1
	prepare_vision_frame = prepare_vision_frame.transpose(0, 3, 1, 2)
	region_mask : Mask = face_parser.run(None,
	{
		face_parser.get_inputs()[0].name: prepare_vision_frame
	})[0][0]
	region_mask = numpy.isin(region_mask.argmax(0), [ FACE_MASK_REGIONS[region] for region in face_mask_regions ])
	region_mask = cv2.resize(region_mask.astype(numpy.float32), crop_vision_frame.shape[:2][::-1])
	region_mask = (cv2.GaussianBlur(region_mask.clip(0, 1), (0, 0), 5).clip(0.5, 1) - 0.5) * 2
	return region_mask
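
# A hedged sketch of how these masks are typically combined downstream, in
# line with the crop_mask_list mentioned in the changelog above (assumption,
# not part of this module): the per-pixel minimum keeps a pixel only where
# every enabled mask agrees.
#
# crop_mask_list = [ box_mask, occlusion_mask, region_mask ]
# crop_mask = numpy.minimum.reduce(crop_mask_list).clip(0, 1)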

def create_mouth_mask(face_landmark_68 : FaceLandmark68) -> Mask:
	convex_hull = cv2.convexHull(face_landmark_68[numpy.r_[3:14, 31:36]].astype(numpy.int32))
	mouth_mask : Mask = numpy.zeros((512, 512), dtype = numpy.float32)
	mouth_mask = cv2.fillConvexPoly(mouth_mask, convex_hull, 1.0)
	mouth_mask = cv2.erode(mouth_mask.clip(0, 1), numpy.ones((21, 3)))
	mouth_mask = cv2.GaussianBlur(mouth_mask, (0, 0), sigmaX = 1, sigmaY = 15)
	return mouth_mask
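
Taken together, a minimal end-to-end usage sketch, assuming the module lives at facefusion/face_masker.py (the changelog above mentions moving the mouth mask into face_masker.py); the crop file name and the chosen regions are illustrative only:

import cv2
import numpy

from facefusion.face_masker import pre_check, create_static_box_mask, create_occlusion_mask, create_region_mask

pre_check() # downloads both models unless skip_download is set
crop_vision_frame = cv2.imread('face_crop.png') # hypothetical square BGR face crop
crop_size = crop_vision_frame.shape[:2][::-1]
box_mask = create_static_box_mask(crop_size, 0.3, (0, 0, 0, 0))
occlusion_mask = create_occlusion_mask(crop_vision_frame)
region_mask = create_region_mask(crop_vision_frame, [ 'skin', 'nose', 'mouth' ])
crop_mask = numpy.minimum.reduce([ box_mask, occlusion_mask, region_mask ]).clip(0, 1)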