Yaohui Wang, Di Yang, Xinyuan Chen, François Brémond, Yu Qiao, Antitza Dantcheva
This is the official PyTorch codebase for LIA-X. LIA-X develops the original LIA to a new level: it now supports interpretable and fine-grained control of the head, mouth, and eyes.
2025.08.20: LIA-X is selected for Spaces of the Week on Hugging Face!
2025.08.13: We release the paper, model, and inference code!
```
git clone https://github.com/wyhsirius/LIA-X
cd LIA-X
```
Set up the environment and download the pre-trained model to `./model`.
```
conda create -n liax python=3.10
conda activate liax
pip install -r requirements.txt
```
Run the following command to launch the Gradio UI locally. For the online interface, visit the HF Space.
```
python app.py
```

Instructions
We provide three tabs: Image Animation, Image Editing, and Video Editing. All of them support fine-grained manipulation of the head, mouth, and eyes via the Control Panel.
- Image Animation
  - Upload `Source Image` and `Driving Video`
  - Use `Control Panel` to edit the source image and the `Edit` button to display the `Edited Source Image`
  - Use the `Animate` button to obtain the `Animated Video`
- Image Editing
  - Upload `Source Image`
  - Use `Control Panel` to edit the source image and the `Edit` button to display the `Edited Source Image`
- Video Editing
  - Upload `Video`
  - Use `Control Panel` to edit the first frame of the video and the `Edit` button to display the `Edited Image`
  - Use the `Generate` button to obtain the `Edited Video`
You can use `inference.py` to run the demo. Use the `--mode` flag to choose a setting from `animation`, `img_edit`, `vid_edit`, and `manipulation`. The `--cfg` flag gives the path to the corresponding configuration file. The `--chunk_size` flag sets how many frames the model processes per iteration; a larger value yields faster inference, so adjust it based on your GPU.
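The chunking idea behind `--chunk_size` can be sketched as follows. This is a minimal illustration, not the actual `inference.py` code; `process` is a stand-in for the model's per-chunk forward pass:

```python
# Hypothetical sketch of chunked frame processing (not the real
# inference.py implementation). A larger chunk_size means fewer model
# invocations, at the cost of more memory per call.
def run_in_chunks(frames, chunk_size, process):
    outputs = []
    for start in range(0, len(frames), chunk_size):
        chunk = frames[start:start + chunk_size]   # up to chunk_size frames
        outputs.extend(process(chunk))             # one model call per chunk
    return outputs
```

The last chunk may be shorter than `chunk_size`; the output ordering matches the input frames.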
Play with `motion_id` and `motion_value` in the configuration file to obtain different results. The following examples are generated with the provided configurations.
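As a rough illustration of the entries involved (the exact key layout and paths here are assumptions; the files under `config/` are the authoritative reference):

```yaml
# Hypothetical config sketch; see the provided files under config/
# for the real schema.
source_path: ./data/source/example.png  # assumed example path
save_dir: ./res                         # assumed example path
motion_id: 3        # which interpretable motion direction to drive
motion_value: 0.5   # edit strength; 0 leaves the source unchanged
```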
1. Image Animation
```
python inference.py --mode animation --cfg 'config/animation/animation1.yaml'
python inference.py --mode animation --cfg 'config/animation/animation6.yaml'
```
2. Video Editing
```
python inference.py --mode vid_edit --cfg 'config/vid_edit/demo1.yaml' # yaw
python inference.py --mode vid_edit --cfg 'config/vid_edit/demo2.yaml' # closing eyes
```
3. Image Editing
```
python inference.py --mode img_edit --cfg 'config/img_edit/demo1.yaml' # yaw
python inference.py --mode img_edit --cfg 'config/img_edit/demo2.yaml' # pout
python inference.py --mode img_edit --cfg 'config/img_edit/demo3.yaml' # close eyes
python inference.py --mode img_edit --cfg 'config/img_edit/demo4.yaml' # move eyeballs
```
4. Linear Manipulation
```
python inference.py --mode manipulation --cfg 'config/manipulation/demo1.yaml' # yaw
python inference.py --mode manipulation --cfg 'config/manipulation/demo2.yaml' # pitch
python inference.py --mode manipulation --cfg 'config/manipulation/demo5.yaml' # close & open eyes
python inference.py --mode manipulation --cfg 'config/manipulation/demo6.yaml' # move eyeballs
```
5. Animating Your Own Data
- Data preparation (image & video cropping)
```
python utils/crop.py --mode img --data_path <YOUR_IMG_PATH> # crop image
python utils/crop.py --mode vid --data_path <YOUR_VID_PATH> # crop video
```
You can use the `increase_scale` and `increase_top_scale` flags to adjust the bounding-box scales. Results will be saved to `./data/source` and `./data/driving`.
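The intent of the `increase_scale` / `increase_top_scale` flags can be pictured with a small sketch. This is a hypothetical illustration of enlarging a face bounding box, not the actual `utils/crop.py` logic; the function name and scaling scheme are assumptions:

```python
# Hypothetical bounding-box enlargement, in the spirit of the
# increase_scale / increase_top_scale flags (not the real crop.py code).
def expand_bbox(box, increase_scale=1.0, increase_top_scale=1.0):
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    dx = w * (increase_scale - 1.0) / 2.0        # widen symmetrically
    dy = h * (increase_scale - 1.0) / 2.0        # extend top and bottom
    extra_top = h * (increase_top_scale - 1.0)   # extra headroom above
    return (x0 - dx, y0 - dy - extra_top, x1 + dx, y1 + dy)
```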
- Set the correct `source_path`, `driving_path`, and `save_dir` in your configuration file
- Play with `motion_value` in the configuration and run the following command. By default (`motion_value=0`), the source image is not edited.
```
python inference.py --mode animation --cfg <YOUR_CONFIG_FILE_PATH>
```

```
@article{wang2025lia,
  title={LIA-X: Interpretable Latent Portrait Animator},
  author={Wang, Yaohui and Yang, Di and Chen, Xinyuan and Bremond, Francois and Qiao, Yu and Dantcheva, Antitza},
  journal={arXiv preprint arXiv:2508.09959},
  year={2025}
}
```