
* Cosmetic changes
* Cosmetic changes
* Run single warm up for the benchmark suite
* Use latest version of Gradio
* More testing
* Introduce basic installer
* Fix typo
* Move more to installer file
* Fix the installer with the uninstall all trick
* Adjust wording
* Fix coreml in installer
* Allow Python 3.9
* Add VENV to installer
* Just some cosmetics
* Just some cosmetics
* Dedicated headless mode, Refine API of UI layouts
* Use --headless for pytest
* Fix testing for Windows
* Normalize output path that lacks extension
* Fix CI for Windows
* Fix CI for Windows
* UI to change output path
* Add conda support for the installer
* Improve installer quite a bit
* Drop conda support
* Install community wheels for coreml silicon
* Improve output video component
* Fix silicon wheel downloading
* Remove venv from installer as we cannot activate via subprocess
* Use join to create wheel name
* Refine the output path normalization
* Refine the output path normalization
* Introduce ProcessMode and rename some methods
* Introduce ProcessMode and rename some methods
* Basic webcam integration and open_ffmpeg()
* Basic webcam integration part2
* Benchmark resolutions now selectable
* Rename benchmark resolution back to benchmark runs
* Fix repeating output path in UI
* Keep output_path untouched if not resolvable
* Add more cases to normalize output path
* None for those tests that don't take source path into account
* Finish basic webcam integration, UI layout now with custom run()
* Fix CI and hide link in webcam UI
* Cosmetics on webcam UI
* Move get_device to utilities
* Fix CI
* Introduce output-image-quality, Show and hide UI according to target media type
* Benchmark with partial result updates
* fix: trim frame sliders not appearing after dragging video
* fix: output and temp frame setting inputs not appearing
* Fix: set increased update delay to 250ms to let Gradio update conditional inputs properly
* Reverted .gitignore
* Adjust timings
* Remove timeout hacks and get fully event driven
* Update dependencies
* Update dependencies
* Revert NSFW library, Conditional unset trim args
* Face selector works better on preview slider release
* Add limit resources to UI
* Introduce vision.py for all CV2 operations, Rename some methods
* Add restoring audio failed
* Decouple updates for preview image and preview frame slider, Move reduce_preview_frame to vision
* Refactor detect_fps based on JSON output (see the sketch after this list)
* Only webcam when open
* More conditions to vision.py
* Add udp and v4l2 streaming to webcam UI
* Detect v4l2 device to be used
* Refactor code a bit
* Use static max memory for UI
* Fix CI
* Looks stable to me
* Update preview
* Update preview

---------

Co-authored-by: Sumit <vizsumit@gmail.com>
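
The "Refactor detect_fps based on JSON output" entry above presumably means reading the frame rate from ffprobe's JSON writer rather than scraping free-form text. A minimal sketch of that approach, hypothetical rather than facefusion's actual implementation:

import json
import subprocess

def detect_fps(target_path : str) -> float:
	# ask ffprobe to report the video stream's frame rate as JSON
	commands = [ 'ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'json', target_path ]
	output = subprocess.run(commands, stdout = subprocess.PIPE).stdout.decode()
	# r_frame_rate comes back as a fraction such as '30000/1001'
	numerator, denominator = json.loads(output)['streams'][0]['r_frame_rate'].split('/')
	return int(numerator) / int(denominator)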
32 lines · 1.3 KiB · Python
import subprocess

import pytest

from facefusion import wording
from facefusion.utilities import conditional_download


@pytest.fixture(scope = 'module', autouse = True)
def before_all() -> None:
	# download the example assets once per module and extract a single frame
	# from the target video to serve as an image target for the tests below
	conditional_download('.assets/examples',
	[
		'https://github.com/facefusion/facefusion-assets/releases/download/examples/source.jpg',
		'https://github.com/facefusion/facefusion-assets/releases/download/examples/target-1080p.mp4'
	])
	subprocess.run([ 'ffmpeg', '-i', '.assets/examples/target-1080p.mp4', '-vframes', '1', '.assets/examples/target-1080p.jpg' ])

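
# Judging by its name and how it is used above, conditional_download() presumably
# fetches each URL into the given directory only when the file is not already
# present. The sketch below is a hypothetical stand-in for illustration, not the
# actual helper shipped in facefusion.utilities:
import os
import urllib.request
from typing import List


def conditional_download_sketch(directory_path : str, urls : List[str]) -> None:
	os.makedirs(directory_path, exist_ok = True)
	for url in urls:
		download_path = os.path.join(directory_path, os.path.basename(url))
		# skip files that already exist, otherwise download them
		if not os.path.isfile(download_path):
			urllib.request.urlretrieve(url, download_path)
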
def test_image_to_image() -> None:
	# run the CLI headlessly against the extracted frame and expect a success message
	commands = [ 'python', 'run.py', '-s', '.assets/examples/source.jpg', '-t', '.assets/examples/target-1080p.jpg', '-o', '.assets/examples', '--headless' ]
	run = subprocess.run(commands, stdout = subprocess.PIPE)

	assert run.returncode == 0
	assert wording.get('processing_image_succeed') in run.stdout.decode()

def test_image_to_video() -> None:
	# process only the first frames of the target video to keep the test fast
	commands = [ 'python', 'run.py', '-s', '.assets/examples/source.jpg', '-t', '.assets/examples/target-1080p.mp4', '-o', '.assets/examples', '--trim-frame-end', '10', '--headless' ]
	run = subprocess.run(commands, stdout = subprocess.PIPE)

	assert run.returncode == 0
	assert wording.get('processing_video_succeed') in run.stdout.decode()
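
Both tests drive run.py through subprocess and rely on the --headless mode that the changelog above introduces for pytest. If further CLI cases were added, the invocation could be factored into a small shared helper; a minimal sketch, with a hypothetical helper name:

def run_headless(*args : str) -> subprocess.CompletedProcess:
	# build the same command line the tests above use, always in headless mode
	commands = [ 'python', 'run.py', *args, '--headless' ]
	return subprocess.run(commands, stdout = subprocess.PIPE)

A test would then call run_headless('-s', '.assets/examples/source.jpg', '-t', '.assets/examples/target-1080p.mp4', '-o', '.assets/examples', '--trim-frame-end', '10') and assert on returncode and stdout exactly as above.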