
303 lines · 8.2 KiB · Python · Executable File
from collections import namedtuple
from typing import Any, Callable, Dict, List, Literal, Optional, Tuple, TypedDict

import numpy
from numpy.typing import NDArray
from onnxruntime import InferenceSession

Scale = float
Score = float
Angle = int

Detection = NDArray[Any]
Prediction = NDArray[Any]

BoundingBox = NDArray[Any]
FaceLandmark5 = NDArray[Any]
FaceLandmark68 = NDArray[Any]
FaceLandmarkSet = TypedDict('FaceLandmarkSet',
{
    '5' : FaceLandmark5, #type:ignore[valid-type]
    '5/68' : FaceLandmark5, #type:ignore[valid-type]
    '68' : FaceLandmark68, #type:ignore[valid-type]
    '68/5' : FaceLandmark68 #type:ignore[valid-type]
})
FaceScoreSet = TypedDict('FaceScoreSet',
{
    'detector' : Score,
    'landmarker' : Score
})
Embedding = NDArray[numpy.float64]
Gender = Literal['female', 'male']
Age = range
Race = Literal['white', 'black', 'latino', 'asian', 'indian', 'arabic']
Face = namedtuple('Face',
[
    'bounding_box',
    'score_set',
    'landmark_set',
    'angle',
    'embedding',
    'normed_embedding',
    'gender',
    'age',
    'race'
])
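A minimal sketch of how a `Face` instance might be populated. The field values here are illustrative only — real instances are produced by the detector, landmarker and recognizer modules:

```python
from collections import namedtuple

import numpy

# Mirror of the Face namedtuple defined above
Face = namedtuple('Face',
[
    'bounding_box',
    'score_set',
    'landmark_set',
    'angle',
    'embedding',
    'normed_embedding',
    'gender',
    'age',
    'race'
])

# Illustrative values -- shapes follow the aliases above (5 and 68 point landmarks)
embedding = numpy.random.rand(512).astype(numpy.float64)
face = Face(
    bounding_box = numpy.array([ 12.0, 34.0, 156.0, 210.0 ]),
    score_set = { 'detector': 0.92, 'landmarker': 0.88 },
    landmark_set = { '5': numpy.zeros((5, 2)), '68': numpy.zeros((68, 2)) },
    angle = 0,
    embedding = embedding,
    normed_embedding = embedding / numpy.linalg.norm(embedding),
    gender = 'female',
    age = range(25, 32),
    race = 'asian'
)
print(face.gender, face.angle)
```

Note that `Age` is a `range`, so an age prediction is an interval rather than a single number.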
FaceSet = Dict[str, List[Face]]
FaceStore = TypedDict('FaceStore',
{
    'static_faces' : FaceSet,
    'reference_faces' : FaceSet
})
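A `FaceSet` groups detected faces under a string key, and a `FaceStore` keeps the static and reference caches apart. A small sketch, using a hypothetical frame-hash key and a stub `Face`:

```python
from collections import namedtuple

import numpy

# Stub Face matching the namedtuple fields defined above
Face = namedtuple('Face', [ 'bounding_box', 'score_set', 'landmark_set', 'angle', 'embedding', 'normed_embedding', 'gender', 'age', 'race' ])
face = Face(numpy.zeros(4), {}, {}, 0, numpy.zeros(512), numpy.zeros(512), 'male', range(30, 40), 'white')

# 'frame-hash-abc' is a hypothetical cache key, not a real hash
face_store = {
    'static_faces': { 'frame-hash-abc': [ face ] },
    'reference_faces': {}
}
print(len(face_store['static_faces']['frame-hash-abc']))
```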

VisionFrame = NDArray[Any]
Mask = NDArray[Any]
Points = NDArray[Any]
Distance = NDArray[Any]
Matrix = NDArray[Any]
Anchors = NDArray[Any]
Translation = NDArray[Any]

AudioBuffer = bytes
Audio = NDArray[Any]
AudioChunk = NDArray[Any]
AudioFrame = NDArray[Any]
Spectrogram = NDArray[Any]
Mel = NDArray[Any]
MelFilterBank = NDArray[Any]

Fps = float
Padding = Tuple[int, int, int, int]
Orientation = Literal['landscape', 'portrait']
Resolution = Tuple[int, int]

ProcessState = Literal['checking', 'processing', 'stopping', 'pending']
QueuePayload = TypedDict('QueuePayload',
{
    'frame_number' : int,
    'frame_path' : str
})
Args = Dict[str, Any]
UpdateProgress = Callable[[int], None]
ProcessFrames = Callable[[List[str], List[QueuePayload], UpdateProgress], None]
ProcessStep = Callable[[str, int, Args], bool]
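The callable aliases describe the contracts between the job runner and the processors. A hypothetical handler matching the `ProcessStep` signature (job id, step index, step args, returning success) might look like:

```python
from typing import Any, Callable, Dict

Args = Dict[str, Any]
ProcessStep = Callable[[str, int, Args], bool]

# Hypothetical step handler -- the real one lives in the job runner
def process_step(job_id : str, step_index : int, step_args : Args) -> bool:
    # treat a step as runnable only when a target path is present
    return bool(step_args.get('target_path'))

step : ProcessStep = process_step
print(step('job-1', 0, { 'target_path': 'target.mp4' }))
```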

Content = Dict[str, Any]

WarpTemplate = Literal['arcface_112_v1', 'arcface_112_v2', 'arcface_128_v2', 'ffhq_512']
WarpTemplateSet = Dict[WarpTemplate, NDArray[Any]]
ProcessMode = Literal['output', 'preview', 'stream']

ErrorCode = Literal[0, 1, 2, 3, 4]
LogLevel = Literal['error', 'warn', 'info', 'debug']
LogLevelSet = Dict[LogLevel, int]

TableHeaders = List[str]
TableContents = List[List[Any]]

VideoMemoryStrategy = Literal['strict', 'moderate', 'tolerant']
FaceDetectorModel = Literal['many', 'retinaface', 'scrfd', 'yoloface']
FaceLandmarkerModel = Literal['many', '2dfan4', 'peppa_wutz']
FaceDetectorSet = Dict[FaceDetectorModel, List[str]]
FaceSelectorMode = Literal['many', 'one', 'reference']
FaceSelectorOrder = Literal['left-right', 'right-left', 'top-bottom', 'bottom-top', 'small-large', 'large-small', 'best-worst', 'worst-best']
FaceMaskType = Literal['box', 'occlusion', 'region']
FaceMaskRegion = Literal['skin', 'left-eyebrow', 'right-eyebrow', 'left-eye', 'right-eye', 'glasses', 'nose', 'mouth', 'upper-lip', 'lower-lip']
TempFrameFormat = Literal['jpg', 'png', 'bmp']
OutputAudioEncoder = Literal['aac', 'libmp3lame', 'libopus', 'libvorbis']
OutputVideoEncoder = Literal['libx264', 'libx265', 'libvpx-vp9', 'h264_nvenc', 'hevc_nvenc', 'h264_amf', 'hevc_amf', 'h264_videotoolbox', 'hevc_videotoolbox']
OutputVideoPreset = Literal['ultrafast', 'superfast', 'veryfast', 'faster', 'fast', 'medium', 'slow', 'slower', 'veryslow']

Download = TypedDict('Download',
{
    'url' : str,
    'path' : str
})
DownloadSet = Dict[str, Download]

ModelOptions = Dict[str, Any]
ModelSet = Dict[str, ModelOptions]
ModelInitializer = NDArray[Any]

ExecutionProviderKey = Literal['cpu', 'coreml', 'cuda', 'directml', 'openvino', 'rocm', 'tensorrt']
ExecutionProviderValue = Literal['CPUExecutionProvider', 'CoreMLExecutionProvider', 'CUDAExecutionProvider', 'DmlExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'TensorrtExecutionProvider']
ExecutionProviderSet = Dict[ExecutionProviderKey, ExecutionProviderValue]
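An `ExecutionProviderSet` maps the short CLI keys to the onnxruntime provider names. A sketch of the obvious one-to-one mapping (each `Literal` value above pairs with its counterpart):

```python
from typing import Dict, Literal

ExecutionProviderKey = Literal['cpu', 'coreml', 'cuda', 'directml', 'openvino', 'rocm', 'tensorrt']
ExecutionProviderValue = Literal['CPUExecutionProvider', 'CoreMLExecutionProvider', 'CUDAExecutionProvider', 'DmlExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'TensorrtExecutionProvider']
ExecutionProviderSet = Dict[ExecutionProviderKey, ExecutionProviderValue]

# Pair each short key with its onnxruntime provider name
execution_provider_set : ExecutionProviderSet = {
    'cpu': 'CPUExecutionProvider',
    'coreml': 'CoreMLExecutionProvider',
    'cuda': 'CUDAExecutionProvider',
    'directml': 'DmlExecutionProvider',
    'openvino': 'OpenVINOExecutionProvider',
    'rocm': 'ROCMExecutionProvider',
    'tensorrt': 'TensorrtExecutionProvider'
}
print(execution_provider_set['cuda'])
```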

ValueAndUnit = TypedDict('ValueAndUnit',
{
    'value' : int,
    'unit' : str
})
ExecutionDeviceFramework = TypedDict('ExecutionDeviceFramework',
{
    'name' : str,
    'version' : str
})
ExecutionDeviceProduct = TypedDict('ExecutionDeviceProduct',
{
    'vendor' : str,
    'name' : str
})
ExecutionDeviceVideoMemory = TypedDict('ExecutionDeviceVideoMemory',
{
    'total' : ValueAndUnit,
    'free' : ValueAndUnit
})
ExecutionDeviceUtilization = TypedDict('ExecutionDeviceUtilization',
{
    'gpu' : ValueAndUnit,
    'memory' : ValueAndUnit
})
ExecutionDevice = TypedDict('ExecutionDevice',
{
    'driver_version' : str,
    'framework' : ExecutionDeviceFramework,
    'product' : ExecutionDeviceProduct,
    'video_memory' : ExecutionDeviceVideoMemory,
    'utilization' : ExecutionDeviceUtilization
})
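The execution-device TypedDicts nest plainly as dictionaries. A small sketch of the video-memory part — the figures are illustrative, not read from any real device:

```python
from typing import TypedDict

ValueAndUnit = TypedDict('ValueAndUnit',
{
    'value' : int,
    'unit' : str
})
ExecutionDeviceVideoMemory = TypedDict('ExecutionDeviceVideoMemory',
{
    'total' : ValueAndUnit,
    'free' : ValueAndUnit
})

# Illustrative figures for a hypothetical 24 GB card
video_memory : ExecutionDeviceVideoMemory = {
    'total': { 'value': 24564, 'unit': 'MB' },
    'free': { 'value': 20112, 'unit': 'MB' }
}
print(video_memory['free']['value'], video_memory['free']['unit'])
```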

AppContext = Literal['cli', 'ui']

InferencePool = Dict[str, InferenceSession]
InferencePoolSet = Dict[AppContext, Dict[str, InferencePool]]

UiWorkflow = Literal['instant_runner', 'job_runner', 'job_manager']

JobStore = TypedDict('JobStore',
{
    'job_keys' : List[str],
    'step_keys' : List[str]
})
JobOutputSet = Dict[str, List[str]]
JobStatus = Literal['drafted', 'queued', 'completed', 'failed']
JobStepStatus = Literal['drafted', 'queued', 'started', 'completed', 'failed']
JobStep = TypedDict('JobStep',
{
    'args' : Args,
    'status' : JobStepStatus
})
Job = TypedDict('Job',
{
    'version' : str,
    'date_created' : str,
    'date_updated' : Optional[str],
    'steps' : List[JobStep]
})
JobSet = Dict[str, Job]
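A `Job` is a plain dictionary that serializes naturally to a JSON job file. A sketch of a freshly drafted job with one step — version string, dates and args are hypothetical values, not a documented file format:

```python
from typing import Any, Dict, List, Literal, Optional, TypedDict

Args = Dict[str, Any]
JobStepStatus = Literal['drafted', 'queued', 'started', 'completed', 'failed']
JobStep = TypedDict('JobStep',
{
    'args' : Args,
    'status' : JobStepStatus
})
Job = TypedDict('Job',
{
    'version' : str,
    'date_created' : str,
    'date_updated' : Optional[str],
    'steps' : List[JobStep]
})

# Hypothetical drafted job with a single step
job : Job = {
    'version': '1',
    'date_created': '2024-08-01T12:00:00+00:00',
    'date_updated': None,
    'steps': [
        { 'args': { 'target_path': 'target.mp4' }, 'status': 'drafted' }
    ]
}
print(job['steps'][0]['status'])
```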

ApplyStateItem = Callable[[Any, Any], None]
StateKey = Literal\
[
    'command',
    'config_path',
    'jobs_path',
    'source_paths',
    'target_path',
    'output_path',
    'face_detector_model',
    'face_detector_size',
    'face_detector_angles',
    'face_detector_score',
    'face_landmarker_model',
    'face_landmarker_score',
    'face_selector_mode',
    'face_selector_order',
    'face_selector_gender',
    'face_selector_race',
    'face_selector_age_start',
    'face_selector_age_end',
    'reference_face_position',
    'reference_face_distance',
    'reference_frame_number',
    'face_mask_types',
    'face_mask_blur',
    'face_mask_padding',
    'face_mask_regions',
    'trim_frame_start',
    'trim_frame_end',
    'temp_frame_format',
    'keep_temp',
    'output_image_quality',
    'output_image_resolution',
    'output_audio_encoder',
    'output_video_encoder',
    'output_video_preset',
    'output_video_quality',
    'output_video_resolution',
    'output_video_fps',
    'skip_audio',
    'processors',
    'open_browser',
    'ui_layouts',
    'ui_workflow',
    'execution_device_id',
    'execution_providers',
    'execution_thread_count',
    'execution_queue_count',
    'video_memory_strategy',
    'system_memory_limit',
    'skip_download',
    'log_level',
    'job_id',
    'job_status',
    'step_index'
]
State = TypedDict('State',
{
    'command' : str,
    'config_path' : str,
    'jobs_path' : str,
    'source_paths' : List[str],
    'target_path' : str,
    'output_path' : str,
    'face_detector_model' : FaceDetectorModel,
    'face_detector_size' : str,
    'face_detector_angles' : List[Angle],
    'face_detector_score' : Score,
    'face_landmarker_model' : FaceLandmarkerModel,
    'face_landmarker_score' : Score,
    'face_selector_mode' : FaceSelectorMode,
    'face_selector_order' : FaceSelectorOrder,
    'face_selector_race' : Race,
    'face_selector_gender' : Gender,
    'face_selector_age_start' : int,
    'face_selector_age_end' : int,
    'reference_face_position' : int,
    'reference_face_distance' : float,
    'reference_frame_number' : int,
    'face_mask_types' : List[FaceMaskType],
    'face_mask_blur' : float,
    'face_mask_padding' : Padding,
    'face_mask_regions' : List[FaceMaskRegion],
    'trim_frame_start' : int,
    'trim_frame_end' : int,
    'temp_frame_format' : TempFrameFormat,
    'keep_temp' : bool,
    'output_image_quality' : int,
    'output_image_resolution' : str,
    'output_audio_encoder' : OutputAudioEncoder,
    'output_video_encoder' : OutputVideoEncoder,
    'output_video_preset' : OutputVideoPreset,
    'output_video_quality' : int,
    'output_video_resolution' : str,
    'output_video_fps' : float,
    'skip_audio' : bool,
    'processors' : List[str],
    'open_browser' : bool,
    'ui_layouts' : List[str],
    'ui_workflow' : UiWorkflow,
    'execution_device_id' : str,
    'execution_providers' : List[ExecutionProviderKey],
    'execution_thread_count' : int,
    'execution_queue_count' : int,
    'video_memory_strategy' : VideoMemoryStrategy,
    'system_memory_limit' : int,
    'skip_download' : bool,
    'log_level' : LogLevel,
    'job_id' : str,
    'job_status' : JobStatus,
    'step_index' : int
})
StateSet = Dict[AppContext, State]
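`ApplyStateItem` names the callback used to push a single key/value into a state. A hypothetical handler matching that signature, writing into a plain dict:

```python
from typing import Any, Callable, Dict

ApplyStateItem = Callable[[Any, Any], None]

state : Dict[str, Any] = {}

# Hypothetical apply handler -- the real one is provided by the state manager
def apply_state_item(key : Any, value : Any) -> None:
    state[key] = value

apply : ApplyStateItem = apply_state_item
apply('execution_thread_count', 4)
print(state)
```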