
* Simplify bbox access
* Code cleanup
* Simplify bbox access
* Move code to face helper
* Swap and paste back without insightface
* Swap and paste back without insightface
* Remove semaphore where possible
* Improve paste back performance
* Cosmetic changes
* Move the predictor to ONNX to avoid tensorflow, Use video ranges for prediction
* Make CI happy
* Move template and size to the options
* Fix different color on box
* Uniform model handling for predictor
* Uniform frame handling for predictor
* Pass kps direct to warp_face
* Fix urllib
* Analyse based on matches
* Analyse based on rate
* Fix CI
* ROCM and OpenVINO mapping for torch backends
* Fix the paste back speed
* Fix import
* Replace retinaface with yunet (#168)
* Remove insightface dependency
* Fix urllib
* Some fixes
* Analyse based on matches
* Analyse based on rate
* Fix CI
* Migrate to Yunet
* Something is off here
* We indeed need semaphore for yunet
* Normalize the normed_embedding
* Fix download of models
* Fix download of models
* Fix download of models
* Add score and improve affine_matrix
* Temp fix for bbox out of frame
* Temp fix for bbox out of frame
* ROCM and OpenVINO mapping for torch backends
* Normalize bbox
* Implement gender age
* Cosmetics on cli args
* Prevent face jumping
* Fix the paste back speed
* Fix import
* Introduce detection size
* Cosmetics on face analyser ARGS and globals
* Temp fix for shaking face
* Accurate event handling
* Accurate event handling
* Accurate event handling
* Set the reference_frame_number in face_selector component
* Simswap model (#171)
* Add simswap models
* Add ghost models
* Introduce normed template
* Conditional prepare and normalize for ghost
* Conditional prepare and normalize for ghost
* Get simswap working
* Get simswap working
* Fix refresh of swapper model
* Refine face selection and detection (#174)
* Refine face selection and detection
* Update README.md
* Fix some face analyser UI
* Fix some face analyser UI
* Introduce range handling for CLI arguments
* Introduce range handling for CLI arguments
* Fix some spacings
* Disable onnxruntime warnings
* Use cv2.blur over cv2.GaussianBlur for better performance
* Revert "Use cv2.blur over cv2.GaussianBlur for better performance"
This reverts commit bab666d6f9.
* Prepare universal face detection
* Prepare universal face detection part2
* Reimplement retinaface
* Introduce cached anchors creation
* Restore filtering to enhance performance
* Minor changes
* Minor changes
* More code but easier to understand
* Minor changes
* Rename predictor to content analyser
* Change detection/recognition to detector/recognizer
* Fix crop frame borders
* Fix spacing
* Allow normalize output without a source
* Improve conditional set face reference
* Update dependencies
* Add timeout for get_download_size
* Fix performance due to disorder
* Move models to assets repository, Adjust namings
* Refactor face analyser
* Rename models once again
* Fix spacing
* Highres simswap (#192)
* Introduce highres simswap
* Fix simswap 256 color issue (#191)
* Fix simswap 256 color issue
* Update face_swapper.py
* Normalize models and host in our repo
* Normalize models and host in our repo
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Rename face analyser direction to face analyser order
* Improve the UI for face selector
* Add best-worst, worst-best detector ordering
* Clear as needed and fix zero score bug
* Fix linter
* Improve startup time by multi thread remote download size
* Just some cosmetics
* Normalize swapper source input, Add blendface_256 (unfinished)
* New paste back (#195)
* add new paste_back (#194)
* add new paste_back
* Update face_helper.py
* Update face_helper.py
* add commandline arguments and gui
* fix conflict
* Update face_mask.py
* type fix
* Clean some wording and typing
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Clean more names, use blur range approach
* Add blur padding range
* Change the padding order
* Fix yunet filename
* Introduce face debugger
* Use percent for mask padding
* Ignore this
* Ignore this
* Simplify debugger output
* implement blendface (#198)
* Clean up after the genius
* Add gpen_bfr_256
* Cosmetics
* Ignore face_mask_padding on face enhancer
* Update face_debugger.py (#202)
* Shrink debug_face() to a minimum
* Mark as 2.0.0 release
* remove unused (#204)
* Apply NMS (#205)
* Apply NMS
* Apply NMS part2
* Fix restoreformer url
* Add debugger cli and gui components (#206)
* Add debugger cli and gui components
* update
* Polishing the types
* Fix usage in README.md
* Update onnxruntime
* Support for webp
* Rename paste-back to face-mask
* Add license to README
* Add license to README
* Extend face selector mode by one
* Update utilities.py (#212)
* Stop inline camera on stream
* Minor webcam updates
* Gracefully start and stop webcam
* Rename capture to video_capture
* Make get webcam capture pure
* Check webcam to not be None
* Remove some is not None
* Use index 0 for webcam
* Remove memory lookup within progress bar
* Less progress bar updates
* Uniform progress bar
* Use classic progress bar
* Fix image and video validation
* Use different hash for cache
* Use best-worse order for webcam
* Normalize padding like CSS
* Update preview
* Fix max memory
* Move disclaimer and license to the docs
* Update wording in README
* Add LICENSE.md
* Fix argument in README
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: alex00ds <31631959+alex00ds@users.noreply.github.com>
95 lines
3.9 KiB
Python
from typing import Optional, Tuple, List
import tempfile

import gradio

import facefusion.globals
import facefusion.choices
from facefusion import wording
from facefusion.typing import OutputVideoEncoder
from facefusion.utilities import is_image, is_video
from facefusion.uis.typing import ComponentName
from facefusion.uis.core import get_ui_component, register_ui_component

OUTPUT_PATH_TEXTBOX : Optional[gradio.Textbox] = None
OUTPUT_IMAGE_QUALITY_SLIDER : Optional[gradio.Slider] = None
OUTPUT_VIDEO_ENCODER_DROPDOWN : Optional[gradio.Dropdown] = None
OUTPUT_VIDEO_QUALITY_SLIDER : Optional[gradio.Slider] = None


def render() -> None:
	global OUTPUT_PATH_TEXTBOX
	global OUTPUT_IMAGE_QUALITY_SLIDER
	global OUTPUT_VIDEO_ENCODER_DROPDOWN
	global OUTPUT_VIDEO_QUALITY_SLIDER

	OUTPUT_PATH_TEXTBOX = gradio.Textbox(
		label = wording.get('output_path_textbox_label'),
		value = facefusion.globals.output_path or tempfile.gettempdir(),
		max_lines = 1
	)
	OUTPUT_IMAGE_QUALITY_SLIDER = gradio.Slider(
		label = wording.get('output_image_quality_slider_label'),
		value = facefusion.globals.output_image_quality,
		step = facefusion.choices.output_image_quality_range[1] - facefusion.choices.output_image_quality_range[0],
		minimum = facefusion.choices.output_image_quality_range[0],
		maximum = facefusion.choices.output_image_quality_range[-1],
		visible = is_image(facefusion.globals.target_path)
	)
	OUTPUT_VIDEO_ENCODER_DROPDOWN = gradio.Dropdown(
		label = wording.get('output_video_encoder_dropdown_label'),
		choices = facefusion.choices.output_video_encoders,
		value = facefusion.globals.output_video_encoder,
		visible = is_video(facefusion.globals.target_path)
	)
	OUTPUT_VIDEO_QUALITY_SLIDER = gradio.Slider(
		label = wording.get('output_video_quality_slider_label'),
		value = facefusion.globals.output_video_quality,
		step = facefusion.choices.output_video_quality_range[1] - facefusion.choices.output_video_quality_range[0],
		minimum = facefusion.choices.output_video_quality_range[0],
		maximum = facefusion.choices.output_video_quality_range[-1],
		visible = is_video(facefusion.globals.target_path)
	)
	register_ui_component('output_path_textbox', OUTPUT_PATH_TEXTBOX)


def listen() -> None:
	OUTPUT_PATH_TEXTBOX.change(update_output_path, inputs = OUTPUT_PATH_TEXTBOX)
	OUTPUT_IMAGE_QUALITY_SLIDER.change(update_output_image_quality, inputs = OUTPUT_IMAGE_QUALITY_SLIDER)
	OUTPUT_VIDEO_ENCODER_DROPDOWN.select(update_output_video_encoder, inputs = OUTPUT_VIDEO_ENCODER_DROPDOWN)
	OUTPUT_VIDEO_QUALITY_SLIDER.change(update_output_video_quality, inputs = OUTPUT_VIDEO_QUALITY_SLIDER)
	multi_component_names : List[ComponentName] =\
	[
		'source_image',
		'target_image',
		'target_video'
	]
	for component_name in multi_component_names:
		component = get_ui_component(component_name)
		if component:
			for method in [ 'upload', 'change', 'clear' ]:
				getattr(component, method)(remote_update, outputs = [ OUTPUT_IMAGE_QUALITY_SLIDER, OUTPUT_VIDEO_ENCODER_DROPDOWN, OUTPUT_VIDEO_QUALITY_SLIDER ])


def remote_update() -> Tuple[gradio.Slider, gradio.Dropdown, gradio.Slider]:
	if is_image(facefusion.globals.target_path):
		return gradio.Slider(visible = True), gradio.Dropdown(visible = False), gradio.Slider(visible = False)
	if is_video(facefusion.globals.target_path):
		return gradio.Slider(visible = False), gradio.Dropdown(visible = True), gradio.Slider(visible = True)
	return gradio.Slider(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False)


def update_output_path(output_path : str) -> None:
	facefusion.globals.output_path = output_path


def update_output_image_quality(output_image_quality : int) -> None:
	facefusion.globals.output_image_quality = output_image_quality


def update_output_video_encoder(output_video_encoder : OutputVideoEncoder) -> None:
	facefusion.globals.output_video_encoder = output_video_encoder


def update_output_video_quality(output_video_quality : int) -> None:
	facefusion.globals.output_video_quality = output_video_quality