This repository collects open-source research from DAMO Academy, Alibaba Group, on customized video generation. It currently includes works that customize identities/attributes and lighting for videos. Code is organized into self-contained subprojects for separate setup and reproduction.
[2025/9/19] UniLumos is accepted by NeurIPS 2025!
[2025/10/29] UniLumos code is now available!
[2026/1/26] LumosX is accepted by ICLR 2026!
[2026/3/21] LumosX code is now available!
If you are interested in our foundational video generation research, please refer to the Lumos project.
| Project | Venue | In one sentence | Code & docs |
|---|---|---|---|
| LumosX | ICLR 2026 | LumosX advances personalized multi-subject video generation through relational data design and relational attention modeling. | LumosX/ · README |
| UniLumos | NeurIPS 2025 | UniLumos advances unified image and video relighting through RGB-space geometry feedback on a flow-matching backbone. | UniLumos/ · README |
✦ ICLR 2026 ✦
LumosX: Relate Any Identities with Their Attributes for Personalized Video Generation
Identity-consistent · Subject-consistent personalized generation
✧ Identity consistency: reference images paired with generated results
✧ Subject consistency: reference images paired with generated results
➜ Reference: LumosX/asserts/images/ · Result GIFs: LumosX/asserts/videos/ · more in LumosX/README.md
- Venue: ICLR 2026
- Summary: We propose LumosX, a framework that advances both data and model design for personalized video generation. The data pipeline builds relational structure from captions and MLLM-derived priors; the model uses Relational Self-Attention and Relational Cross-Attention to encode subject–attribute dependencies. Companion evaluation resources live under LumosX/benchmark/.
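The core intuition behind the relational attention described above can be illustrated in miniature: each subject query is restricted to the attribute keys it is actually related to. The sketch below is a toy NumPy illustration under assumed semantics (a binary subject–attribute relation mask); `relational_attention` and its arguments are hypothetical names, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relational_attention(q, k, v, relation_mask):
    """Attention where each query may only attend to keys linked to it by
    relation_mask (1 = related subject/attribute pair, 0 = unrelated)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # Block unrelated pairs with a large negative score before softmax.
    scores = np.where(relation_mask.astype(bool), scores, -1e9)
    return softmax(scores, axis=-1) @ v

# Two subject queries, two attribute keys; each subject is tied to one attribute.
q = np.eye(2, 4)
k = np.eye(2, 4)
v = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
mask = np.array([[1, 0],
                 [0, 1]])
out = relational_attention(q, k, v, mask)
```

With a diagonal mask, each subject's output collapses onto its own attribute's value vector; with an all-ones mask this reduces to ordinary cross-attention.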
Quick links
- Model weights: Hugging Face · LumosX
- Documentation: LumosX/README.md — installation, checkpoints, inference, and benchmark evaluation
✦ NeurIPS 2025 ✦
UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback
Unified image & video relighting · physics-plausible feedback
➜ Assets live under UniLumos/assets/ (the same paths referenced in UniLumos/README.md); add the GIFs locally if the folder is empty.
- Venue: NeurIPS 2025
- Summary: We propose UniLumos, a unified relighting framework for images and videos. Supervision uses depth and normal maps from model outputs to align lighting with scene geometry; path consistency learning keeps this effective under few-step training. Companion evaluation is provided by LumosBench (see UniLumos/LumosBench/).
Quick links
- Model weights: Hugging Face · UniLumos
- Documentation: UniLumos/README.md — installation, checkpoints, inference, and LumosBench evaluation
```
Lumos-Custom/
├── README.md            # This file: umbrella overview
├── LumosX/              # ICLR 2026 · personalized multi-subject video generation
│   └── README.md
└── UniLumos/            # NeurIPS 2025 · unified relighting + LumosBench/
    ├── README.md
    └── LumosBench/
```
```shell
git clone https://github.com/alibaba-damo-academy/Lumos-Custom.git
cd Lumos-Custom

# LumosX
cd LumosX
# Follow LumosX/README.md

# or UniLumos
cd ../UniLumos
# Follow UniLumos/README.md
```

If you use either project, please cite the corresponding paper. BibTeX entries are in the Citation section of each subproject README.md.
```bibtex
@inproceedings{UniLumos,
  title={UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback},
  author={Liu, Pengwei and Yuan, Hangjie and Dong, Bo and Xing, Jiazheng and Wang, Jinwang and Zhao, Rui and Chen, Weihua and Wang, Fan},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems}
}

@inproceedings{LumosX,
  title={LumosX: Relate Any Identities with Their Attributes for Personalized Video Generation},
  author={Xing, Jiazheng and Du, Fei and Yuan, Hangjie and Liu, Pengwei and Xu, Hongbin and Ci, Hai and Niu, Ruigang and Chen, Weihua and Wang, Fan and Liu, Yong},
  booktitle={The Fourteenth International Conference on Learning Representations}
}
```

- Foundational video generation: Lumos.