
* feat/yoloface (#334)
* added yolov8 to face_detector (#323)
* added yolov8 to face_detector
* added yolov8 to face_detector
* Initial cleanup and renaming
* Update README
* refactored detect_with_yoloface (#329)
* refactored detect_with_yoloface
* apply review
* Change order again
* Restore working code
* modified code (#330)
* refactored detect_with_yoloface
* apply review
* use temp_frame in detect_with_yoloface
* reorder
* modified
* reorder models
* Tiny cleanup
---------
Co-authored-by: tamoharu <133945583+tamoharu@users.noreply.github.com>
* include audio file functions (#336)
* Add testing for audio handlers
* Change order
* Fix naming
* Use correct typing in choices
* Update help message for arguments, Notation-based wording approach (#347)
* Update help message for arguments, Notation-based wording approach
* Fix installer
* Audio functions (#345)
* Update ffmpeg.py
* Create audio.py
* Update ffmpeg.py
* Update audio.py
* Update audio.py
* Update typing.py
* Update ffmpeg.py
* Update audio.py
* Rename Frame to VisionFrame (#346)
* Minor tidy up
* Introduce audio testing
* Add more todo for testing
* Add more todo for testing
* Fix indent
* Enable venv on the fly
* Enable venv on the fly
* Revert venv on the fly
* Revert venv on the fly
* Force Gradio to shut up
* Force Gradio to shut up
* Clear temp before processing
* Reduce terminal output
* include audio file functions
* Enforce output resolution on merge video
* Minor cleanups
* Add age and gender to face debugger items (#353)
* Add age and gender to face debugger items
* Rename like suggested in the code review
* Fix the output framerate vs. time
* Lip Sync (#356)
* Cli implementation of wav2lip
* - create get_first_item()
- remove non gan wav2lip model
- implement video memory strategy
- implement get_reference_frame()
- implement process_image()
- rearrange crop_mask_list
- implement test_cli
* Simplify testing
* Rename to lip syncer
* Fix testing
* Fix testing
* Minor cleanup
* Cuda 12 installer (#362)
* Make cuda nightly (12) the default
* Better keep legacy cuda just in case
* Use CUDA and ROCM versions
* Remove macOS options from installer (CoreML included in default package)
* Add lip-syncer support to source component
* Add lip-syncer support to source component
* Fix the check in the source component
* Add target image check
* Introduce more helpers to suit the lip-syncer needs
* Downgrade onnxruntime as of buggy 1.17.0 release
* Revert "Downgrade onnxruntime as of buggy 1.17.0 release"
This reverts commit f4a7ae6824.
* More testing and add todos
* Fix the frame processor API to at least not throw errors
* Introduce dict based frame processor inputs (#364)
* Introduce dict based frame processor inputs
* Forgot to adjust webcam
* create path payloads (#365)
* create index payload to paths for process_frames
* rename to payload_paths
* This code is now poetry
* Fix the terminal output
* Make lip-syncer work in the preview
* Remove face debugger test for now
* Reorder reference_faces, Fix testing
* Use inswapper_128 on buggy onnxruntime 1.17.0
* Undo inswapper_128_fp16 due to broken onnxruntime 1.17.0
* Undo inswapper_128_fp16 due to broken onnxruntime 1.17.0
* Fix lip_syncer occluder & region mask issue
* Fix preview once in case there was no output video fps
* fix lip_syncer custom fps
* remove unused import
* Add 68 landmark functions (#367)
* Add 68 landmark model
* Add landmark to face object
* Re-arrange and modify typing
* Rename function
* Rearrange
* Rearrange
* ignore type
* ignore type
* change type
* ignore
* name
* Some cleanup
* Some cleanup
* Oops, I broke something
* Feat/face analyser refactoring (#369)
* Restructure face analyser and start TDD
* YoloFace and Yunet testing are passing
* Remove offset from yoloface detection
* Cleanup code
* Tiny fix
* Fix get_many_faces()
* Tiny fix (again)
* Use 320x320 fallback for retinaface
* Fix merging mashup
* Upload wav2lip model
* Upload 2dfan2 model and rename internal to face_predictor
* Downgrade onnxruntime for most cases
* Update for the face debugger to render landmark 68
* Try to make detect_face_landmark_68() and detect_gender_age() more uniform
* Enable retinaface testing for 320x320
* Make detect_face_landmark_68() and detect_gender_age() as uniform as … (#370)
* Make detect_face_landmark_68() and detect_gender_age() as uniform as possible
* Revert landmark scale and translation
* Make box-mask for lip-syncer adjustable
* Add create_bbox_from_landmark()
* Remove currently unused code
* Feat/uniface (#375)
* add uniface (#373)
* Finalize UniFace implementation
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* My approach on how to do it
* edit
* edit
* replace vertical blur with gaussian
* remove region mask
* Rebase against next and restore method
* Minor improvements
* Minor improvements
* rename & add forehead padding
* Adjust and host uniface model
* Use 2dfan4 model
* Rename to face landmarker
* Feat/replace bbox with bounding box (#380)
* Add landmark 68 to 5 conversion
* Add landmark 68 to 5 conversion
* Keep 5, 5/68 and 68 landmarks
* Replace kps with landmark
* Replace bbox with bounding box
* Reshape face_landmark5_list differently
* Make yoloface the default
* Move convert_face_landmark_68_to_5 to face_helper
* Minor spacing issue
* Dynamic detector sizes according to model (#382)
* Dynamic detector sizes according to model
* Dynamic detector sizes according to model
* Undo falsely committed files
* Add lip syncer model to the UI
* fix halo (#383)
* Bump to 2.3.0
* Update README and wording
* Update README and wording
* Fix spacing
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix, Move mouth mask to face_masker.py
* Apply _vision suffix
* Apply _vision suffix
* increase forehead padding
---------
Co-authored-by: tamoharu <133945583+tamoharu@users.noreply.github.com>
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
132 lines
4.0 KiB
Python
from typing import Optional, List, Tuple
from functools import lru_cache

import cv2

from facefusion.typing import VisionFrame, Resolution
from facefusion.choices import video_template_sizes
from facefusion.filesystem import is_image, is_video


def get_video_frame(video_path : str, frame_number : int = 0) -> Optional[VisionFrame]:
	if is_video(video_path):
		video_capture = cv2.VideoCapture(video_path)
		if video_capture.isOpened():
			# clamp the 1-based frame_number to the frame total before seeking
			frame_total = video_capture.get(cv2.CAP_PROP_FRAME_COUNT)
			video_capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, frame_number - 1))
			has_vision_frame, vision_frame = video_capture.read()
			video_capture.release()
			if has_vision_frame:
				return vision_frame
	return None


def count_video_frame_total(video_path : str) -> int:
	if is_video(video_path):
		video_capture = cv2.VideoCapture(video_path)
		if video_capture.isOpened():
			video_frame_total = int(video_capture.get(cv2.CAP_PROP_FRAME_COUNT))
			video_capture.release()
			return video_frame_total
	return 0


def detect_video_fps(video_path : str) -> Optional[float]:
	if is_video(video_path):
		video_capture = cv2.VideoCapture(video_path)
		if video_capture.isOpened():
			video_fps = video_capture.get(cv2.CAP_PROP_FPS)
			video_capture.release()
			return video_fps
	return None


def detect_video_resolution(video_path : str) -> Optional[Tuple[float, float]]:
	if is_video(video_path):
		video_capture = cv2.VideoCapture(video_path)
		if video_capture.isOpened():
			width = video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)
			height = video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)
			video_capture.release()
			return width, height
	return None


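# A minimal usage sketch for the probing helpers above, assuming a local clip
# at 'input.mp4' (hypothetical path, not part of the module):
#
#	vision_frame = get_video_frame('input.mp4', 1)
#	frame_total = count_video_frame_total('input.mp4')
#	video_fps = detect_video_fps('input.mp4')
#	video_resolution = detect_video_resolution('input.mp4')
#
# Each helper returns None (or 0 for the frame count) when the path does not
# point to a readable video.

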
def create_video_resolutions(video_path : str) -> Optional[List[str]]:
	temp_resolutions = []
	video_resolutions = []
	video_resolution = detect_video_resolution(video_path)

	if video_resolution:
		width, height = video_resolution
		temp_resolutions.append(normalize_resolution(video_resolution))
		# scale every template size along the shorter edge, keeping the aspect ratio
		for template_size in video_template_sizes:
			if width > height:
				temp_resolutions.append(normalize_resolution((template_size * width / height, template_size)))
			else:
				temp_resolutions.append(normalize_resolution((template_size, template_size * height / width)))
		# deduplicate, sort ascending and pack into 'WIDTHxHEIGHT' strings
		temp_resolutions = sorted(set(temp_resolutions))
		for temp_resolution in temp_resolutions:
			video_resolutions.append(pack_resolution(temp_resolution))
		return video_resolutions
	return None


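# A sketch of the expected output, assuming video_template_sizes contains
# common heights such as 240, 360, 720 and 1080 (the exact list lives in
# facefusion.choices): a 1920x1080 landscape source would yield entries like
#
#	create_video_resolutions('input.mp4')
#	# ['426x240', '640x360', '1280x720', '1920x1080', ...]
#
# with each template size applied to the shorter edge.

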
def normalize_resolution(resolution : Tuple[float, float]) -> Resolution:
	width, height = resolution

	if width and height:
		# snap both dimensions to the nearest even number, as video encoders expect
		normalize_width = round(width / 2) * 2
		normalize_height = round(height / 2) * 2
		return normalize_width, normalize_height
	return 0, 0


def pack_resolution(resolution : Tuple[float, float]) -> str:
	width, height = normalize_resolution(resolution)
	return str(width) + 'x' + str(height)


def unpack_resolution(resolution : str) -> Resolution:
	width, height = map(int, resolution.split('x'))
	return width, height


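# pack_resolution() and unpack_resolution() are inverses on even dimensions,
# for example (hypothetical values):
#
#	pack_resolution((1920, 1080))   # '1920x1080'
#	unpack_resolution('1920x1080')  # (1920, 1080)
#
# Odd or fractional input is snapped to even numbers by normalize_resolution()
# before packing.

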
def resize_frame_resolution(vision_frame : VisionFrame, max_width : int, max_height : int) -> VisionFrame:
	height, width = vision_frame.shape[:2]

	if height > max_height or width > max_width:
		# downscale only: pick the scale that fits both limits, never upscale
		scale = min(max_height / height, max_width / width)
		new_width = int(width * scale)
		new_height = int(height * scale)
		return cv2.resize(vision_frame, (new_width, new_height))
	return vision_frame


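# resize_frame_resolution() only ever shrinks a frame: a 3840x2160 frame
# limited to 1280x720 comes back as 1280x720, while a 640x360 frame passes
# through untouched, for example:
#
#	vision_frame = resize_frame_resolution(vision_frame, 1280, 720)

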
def normalize_frame_color(vision_frame : VisionFrame) -> VisionFrame:
	return cv2.cvtColor(vision_frame, cv2.COLOR_BGR2RGB)


@lru_cache(maxsize = 128)
def read_static_image(image_path : str) -> Optional[VisionFrame]:
	return read_image(image_path)


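# Note on the cache above: lru_cache returns the same ndarray object for
# repeated calls with the same path, so callers should treat the result as
# read-only (or copy it) to avoid corrupting the cached frame.

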
def read_static_images(image_paths : List[str]) -> Optional[List[VisionFrame]]:
	frames = []
	if image_paths:
		for image_path in image_paths:
			frames.append(read_static_image(image_path))
	return frames


def read_image(image_path : str) -> Optional[VisionFrame]:
	if is_image(image_path):
		return cv2.imread(image_path)
	return None


def write_image(image_path : str, frame : VisionFrame) -> bool:
	if image_path:
		return cv2.imwrite(image_path, frame)
	return False
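

# A hedged end-to-end sketch, not part of the original module; 'input.mp4' and
# 'first_frame.png' are hypothetical paths:

if __name__ == '__main__':
	vision_frame = get_video_frame('input.mp4', 1)
	if vision_frame is not None:
		print('fps:', detect_video_fps('input.mp4'))
		print('frames:', count_video_frame_total('input.mp4'))
		print('resolutions:', create_video_resolutions('input.mp4'))
		# cap the preview size and save the first frame to disk
		vision_frame = resize_frame_resolution(vision_frame, 1280, 720)
		write_image('first_frame.png', vision_frame)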