
* Simplify bbox access
* Code cleanup
* Simplify bbox access
* Move code to face helper
* Swap and paste back without insightface
* Swap and paste back without insightface
* Remove semaphore where possible
* Improve paste back performance
* Cosmetic changes
* Move the predictor to ONNX to avoid tensorflow, Use video ranges for prediction
* Make CI happy
* Move template and size to the options
* Fix different color on box
* Uniform model handling for predictor
* Uniform frame handling for predictor
* Pass kps direct to warp_face
* Fix urllib
* Analyse based on matches
* Analyse based on rate
* Fix CI
* ROCM and OpenVINO mapping for torch backends
* Fix the paste back speed
* Fix import
* Replace retinaface with yunet (#168)
* Remove insightface dependency
* Fix urllib
* Some fixes
* Analyse based on matches
* Analyse based on rate
* Fix CI
* Migrate to Yunet
* Something is off here
* We indeed need semaphore for yunet
* Normalize the normed_embedding
* Fix download of models
* Fix download of models
* Fix download of models
* Add score and improve affine_matrix
* Temp fix for bbox out of frame
* Temp fix for bbox out of frame
* ROCM and OpenVINO mapping for torch backends
* Normalize bbox
* Implement gender age
* Cosmetics on cli args
* Prevent face jumping
* Fix the paste back speed
* Fix import
* Introduce detection size
* Cosmetics on face analyser ARGS and globals
* Temp fix for shaking face
* Accurate event handling
* Accurate event handling
* Accurate event handling
* Set the reference_frame_number in face_selector component
* Simswap model (#171)
* Add simswap models
* Add ghost models
* Introduce normed template
* Conditional prepare and normalize for ghost
* Conditional prepare and normalize for ghost
* Get simswap working
* Get simswap working
* Fix refresh of swapper model
* Refine face selection and detection (#174)
* Refine face selection and detection
* Update README.md
* Fix some face analyser UI
* Fix some face analyser UI
* Introduce range handling for CLI arguments
* Introduce range handling for CLI arguments
* Fix some spacings
* Disable onnxruntime warnings
* Use cv2.blur over cv2.GaussianBlur for better performance
* Revert "Use cv2.blur over cv2.GaussianBlur for better performance"
This reverts commit bab666d6f9.
* Prepare universal face detection
* Prepare universal face detection part2
* Reimplement retinaface
* Introduce cached anchors creation
* Restore filtering to enhance performance
* Minor changes
* Minor changes
* More code but easier to understand
* Minor changes
* Rename predictor to content analyser
* Change detection/recognition to detector/recognizer
* Fix crop frame borders
* Fix spacing
* Allow normalize output without a source
* Improve conditional set face reference
* Update dependencies
* Add timeout for get_download_size
* Fix performance due to disorder
* Move models to assets repository, Adjust namings
* Refactor face analyser
* Rename models once again
* Fix spacing
* Highres simswap (#192)
* Introduce highres simswap
* Fix simswap 256 color issue (#191)
* Fix simswap 256 color issue
* Update face_swapper.py
* Normalize models and host in our repo
* Normalize models and host in our repo
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Rename face analyser direction to face analyser order
* Improve the UI for face selector
* Add best-worst, worst-best detector ordering
* Clear as needed and fix zero score bug
* Fix linter
* Improve startup time by multi thread remote download size
* Just some cosmetics
* Normalize swapper source input, Add blendface_256 (unfinished)
* New paste back (#195)
* add new paste_back (#194)
* add new paste_back
* Update face_helper.py
* Update face_helper.py
* add commandline arguments and gui
* fix conflict
* Update face_mask.py
* type fix
* Clean some wording and typing
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Clean more names, use blur range approach
* Add blur padding range
* Change the padding order
* Fix yunet filename
* Introduce face debugger
* Use percent for mask padding
* Ignore this
* Ignore this
* Simplify debugger output
* implement blendface (#198)
* Clean up after the genius
* Add gpen_bfr_256
* Cosmetics
* Ignore face_mask_padding on face enhancer
* Update face_debugger.py (#202)
* Shrink debug_face() to a minimum
* Mark as 2.0.0 release
* remove unused (#204)
* Apply NMS (#205)
* Apply NMS
* Apply NMS part2
* Fix restoreformer url
* Add debugger cli and gui components (#206)
* Add debugger cli and gui components
* update
* Polishing the types
* Fix usage in README.md
* Update onnxruntime
* Support for webp
* Rename paste-back to face-mask
* Add license to README
* Add license to README
* Extend face selector mode by one
* Update utilities.py (#212)
* Stop inline camera on stream
* Minor webcam updates
* Gracefully start and stop webcam
* Rename capture to video_capture
* Make get webcam capture pure
* Check webcam to not be None
* Remove some is not None
* Use index 0 for webcam
* Remove memory lookup within progress bar
* Less progress bar updates
* Uniform progress bar
* Use classic progress bar
* Fix image and video validation
* Use different hash for cache
* Use best-worst order for webcam
* Normalize padding like CSS
* Update preview
* Fix max memory
* Move disclaimer and license to the docs
* Update wording in README
* Add LICENSE.md
* Fix argument in README
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: alex00ds <31631959+alex00ds@users.noreply.github.com>
import glob
import platform
import subprocess
import pytest

import facefusion.globals
from facefusion.utilities import conditional_download, extract_frames, create_temp, get_temp_directory_path, clear_temp, normalize_output_path, normalize_padding, is_file, is_directory, is_image, is_video, get_download_size, is_download_done, encode_execution_providers, decode_execution_providers


@pytest.fixture(scope = 'module', autouse = True)
def before_all() -> None:
	facefusion.globals.temp_frame_quality = 100
	facefusion.globals.trim_frame_start = None
	facefusion.globals.trim_frame_end = None
	facefusion.globals.temp_frame_format = 'png'
	conditional_download('.assets/examples',
	[
		'https://github.com/facefusion/facefusion-assets/releases/download/examples/source.jpg',
		'https://github.com/facefusion/facefusion-assets/releases/download/examples/target-240p.mp4'
	])
	subprocess.run([ 'ffmpeg', '-i', '.assets/examples/target-240p.mp4', '-vf', 'fps=25', '.assets/examples/target-240p-25fps.mp4' ])
	subprocess.run([ 'ffmpeg', '-i', '.assets/examples/target-240p.mp4', '-vf', 'fps=30', '.assets/examples/target-240p-30fps.mp4' ])
	subprocess.run([ 'ffmpeg', '-i', '.assets/examples/target-240p.mp4', '-vf', 'fps=60', '.assets/examples/target-240p-60fps.mp4' ])


@pytest.fixture(scope = 'function', autouse = True)
def before_each() -> None:
	facefusion.globals.trim_frame_start = None
	facefusion.globals.trim_frame_end = None
	facefusion.globals.temp_frame_quality = 90
	facefusion.globals.temp_frame_format = 'jpg'


def test_extract_frames() -> None:
	target_paths =\
	[
		'.assets/examples/target-240p-25fps.mp4',
		'.assets/examples/target-240p-30fps.mp4',
		'.assets/examples/target-240p-60fps.mp4'
	]
	for target_path in target_paths:
		temp_directory_path = get_temp_directory_path(target_path)
		create_temp(target_path)

		assert extract_frames(target_path, 30.0) is True
		assert len(glob.glob1(temp_directory_path, '*.jpg')) == 324

		clear_temp(target_path)


def test_extract_frames_with_trim_start() -> None:
	facefusion.globals.trim_frame_start = 224
	data_provider =\
	[
		('.assets/examples/target-240p-25fps.mp4', 55),
		('.assets/examples/target-240p-30fps.mp4', 100),
		('.assets/examples/target-240p-60fps.mp4', 212)
	]
	for target_path, frame_total in data_provider:
		temp_directory_path = get_temp_directory_path(target_path)
		create_temp(target_path)

		assert extract_frames(target_path, 30.0) is True
		assert len(glob.glob1(temp_directory_path, '*.jpg')) == frame_total

		clear_temp(target_path)


def test_extract_frames_with_trim_start_and_trim_end() -> None:
	facefusion.globals.trim_frame_start = 124
	facefusion.globals.trim_frame_end = 224
	data_provider =\
	[
		('.assets/examples/target-240p-25fps.mp4', 120),
		('.assets/examples/target-240p-30fps.mp4', 100),
		('.assets/examples/target-240p-60fps.mp4', 50)
	]
	for target_path, frame_total in data_provider:
		temp_directory_path = get_temp_directory_path(target_path)
		create_temp(target_path)

		assert extract_frames(target_path, 30.0) is True
		assert len(glob.glob1(temp_directory_path, '*.jpg')) == frame_total

		clear_temp(target_path)


def test_extract_frames_with_trim_end() -> None:
	facefusion.globals.trim_frame_end = 100
	data_provider =\
	[
		('.assets/examples/target-240p-25fps.mp4', 120),
		('.assets/examples/target-240p-30fps.mp4', 100),
		('.assets/examples/target-240p-60fps.mp4', 50)
	]
	for target_path, frame_total in data_provider:
		temp_directory_path = get_temp_directory_path(target_path)
		create_temp(target_path)

		assert extract_frames(target_path, 30.0) is True
		assert len(glob.glob1(temp_directory_path, '*.jpg')) == frame_total

		clear_temp(target_path)
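
A quick sanity check on the expected frame totals above: the trim frame numbers refer to the source clip's own frame rate, and the trimmed range is then re-sampled to the requested 30 fps, so the counts follow from simple arithmetic. Illustrative only, not part of the test suite:

# e.g. the 60 fps clip trimmed to frames 124..224 covers 100 / 60 seconds,
# which re-sampled to 30 fps yields 50 frames; the 25 fps clip yields 120.
assert round(100 / 60 * 30) == 50
assert round(100 / 25 * 30) == 120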


def test_normalize_output_path() -> None:
	if platform.system().lower() != 'windows':
		assert normalize_output_path('.assets/examples/source.jpg', None, '.assets/examples/target-240p.mp4') == '.assets/examples/target-240p.mp4'
		assert normalize_output_path(None, '.assets/examples/target-240p.mp4', '.assets/examples/target-240p.mp4') == '.assets/examples/target-240p.mp4'
		assert normalize_output_path(None, '.assets/examples/target-240p.mp4', '.assets/examples') == '.assets/examples/target-240p.mp4'
		assert normalize_output_path('.assets/examples/source.jpg', '.assets/examples/target-240p.mp4', '.assets/examples') == '.assets/examples/source-target-240p.mp4'
		assert normalize_output_path(None, '.assets/examples/target-240p.mp4', '.assets/examples/output.mp4') == '.assets/examples/output.mp4'
		assert normalize_output_path(None, '.assets/examples/target-240p.mp4', '.assets/output.mov') == '.assets/output.mp4'
	assert normalize_output_path(None, '.assets/examples/target-240p.mp4', '.assets/examples/invalid') is None
	assert normalize_output_path(None, '.assets/examples/target-240p.mp4', '.assets/invalid/output.mp4') is None
	assert normalize_output_path(None, '.assets/examples/target-240p.mp4', 'invalid') is None
	assert normalize_output_path('.assets/examples/source.jpg', '.assets/examples/target-240p.mp4', None) is None


def test_normalize_padding() -> None:
	assert normalize_padding([ 0, 0, 0, 0 ]) == (0, 0, 0, 0)
	assert normalize_padding([ 1 ]) == (1, 1, 1, 1)
	assert normalize_padding([ 1, 2 ]) == (1, 2, 1, 2)
	assert normalize_padding([ 1, 2, 3 ]) == (1, 2, 3, 2)
	assert normalize_padding(None) is None
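
The "Normalize padding like CSS" commit above suggests the shorthand follows CSS rules: one value for all sides, two for top/bottom and right/left, three for top, right/left and bottom. A minimal sketch consistent with the assertions above, assuming a (top, right, bottom, left) tuple; the actual facefusion.utilities implementation may differ:

from typing import List, Optional, Tuple

Padding = Tuple[int, int, int, int]


def normalize_padding(padding : Optional[List[int]]) -> Optional[Padding]:
	# expand the CSS-like shorthand into a fixed (top, right, bottom, left) tuple
	if padding is None:
		return None
	if len(padding) == 1:
		return (padding[0], padding[0], padding[0], padding[0])
	if len(padding) == 2:
		return (padding[0], padding[1], padding[0], padding[1])
	if len(padding) == 3:
		return (padding[0], padding[1], padding[2], padding[1])
	return (padding[0], padding[1], padding[2], padding[3])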


def test_is_file() -> None:
	assert is_file('.assets/examples/source.jpg') is True
	assert is_file('.assets/examples') is False
	assert is_file('invalid') is False


def test_is_directory() -> None:
	assert is_directory('.assets/examples') is True
	assert is_directory('.assets/examples/source.jpg') is False
	assert is_directory('invalid') is False


def test_is_image() -> None:
	assert is_image('.assets/examples/source.jpg') is True
	assert is_image('.assets/examples/target-240p.mp4') is False
	assert is_image('invalid') is False


def test_is_video() -> None:
	assert is_video('.assets/examples/target-240p.mp4') is True
	assert is_video('.assets/examples/source.jpg') is False
	assert is_video('invalid') is False


def test_get_download_size() -> None:
	assert get_download_size('https://github.com/facefusion/facefusion-assets/releases/download/examples/target-240p.mp4') == 191675
	assert get_download_size('https://github.com/facefusion/facefusion-assets/releases/download/examples/target-360p.mp4') == 370732
	assert get_download_size('invalid') == 0
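
Given the "Fix urllib" and "Add timeout for get_download_size" commits above, get_download_size most likely reads the remote Content-Length header and falls back to 0 when the URL is invalid or unreachable. A hedged sketch under that assumption; the timeout value and error handling are illustrative, not the actual implementation:

import urllib.request


def get_download_size(url : str) -> int:
	# read the remote Content-Length; return 0 for invalid or unreachable URLs
	try:
		response = urllib.request.urlopen(url, timeout = 10)
		return int(response.headers.get('Content-Length', 0))
	except (OSError, ValueError):
		return 0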


def test_is_download_done() -> None:
	assert is_download_done('https://github.com/facefusion/facefusion-assets/releases/download/examples/target-240p.mp4', '.assets/examples/target-240p.mp4') is True
	assert is_download_done('https://github.com/facefusion/facefusion-assets/releases/download/examples/target-240p.mp4', 'invalid') is False
	assert is_download_done('invalid', 'invalid') is False


def test_encode_execution_providers() -> None:
	assert encode_execution_providers([ 'CPUExecutionProvider' ]) == [ 'cpu' ]


def test_decode_execution_providers() -> None:
	assert decode_execution_providers([ 'cpu' ]) == [ 'CPUExecutionProvider' ]
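
encode_execution_providers and decode_execution_providers appear to translate between onnxruntime provider names and short CLI aliases ('CPUExecutionProvider' <-> 'cpu'). A minimal sketch that satisfies the two assertions above, assuming the alias is simply the provider name lowercased with the 'ExecutionProvider' suffix dropped; the real implementation may differ:

from typing import List

import onnxruntime


def encode_execution_providers(execution_providers : List[str]) -> List[str]:
	# 'CPUExecutionProvider' -> 'cpu', 'CUDAExecutionProvider' -> 'cuda', ...
	return [ execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers ]


def decode_execution_providers(execution_providers : List[str]) -> List[str]:
	# map aliases back to the provider names that onnxruntime reports as available
	available_execution_providers = onnxruntime.get_available_providers()
	return [ execution_provider for execution_provider in available_execution_providers if execution_provider.replace('ExecutionProvider', '').lower() in execution_providers ]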