TACO: Taming Diffusion for in-the-wild Video Amodal Completion

* Work done during an internship at BIGAI    Corresponding Author
1State Key Laboratory of General Artificial Intelligence, Peking University    
2State Key Laboratory of General Artificial Intelligence, BIGAI    3Tsinghua University

TACO can synthesize consistent amodal content for occluded objects in in-the-wild videos.

Abstract

Humans can infer complete shapes and appearances of objects from limited visual cues, relying on extensive prior knowledge of the physical world. However, completing partially observable objects while ensuring consistency across video frames remains challenging for existing models, especially for unstructured, in-the-wild videos. This paper tackles the task of Video Amodal Completion (VAC), which aims to generate the complete object consistently throughout the video given a visual prompt specifying the object of interest. Leveraging the rich, consistent manifolds learned by pre-trained video diffusion models, we propose a conditional diffusion model, TACO, that repurposes these manifolds for VAC. To enable effective and robust generalization to challenging in-the-wild scenarios, we curate a large-scale synthetic dataset with multiple difficulty levels by systematically imposing occlusions onto un-occluded videos. Building on this, we devise a progressive fine-tuning paradigm that starts with simpler recovery tasks and gradually advances to more complex ones. We demonstrate TACO's versatility on a wide range of in-the-wild videos from the Internet, as well as on diverse, unseen datasets commonly used in autonomous driving, robotic manipulation, and scene understanding. Moreover, we show that TACO can be effectively applied to various downstream tasks like object reconstruction and pose estimation, highlighting its potential to facilitate physical world understanding and reasoning.

Data Curation

We curate the Object-video-Overlay (OvO) dataset in three steps:
1) Segment candidate objects using provided annotations or off-the-shelf segmentation models.
2) Check the completeness of candidates with heuristic rules and manual filtering.
3) Progressively overlay consistent occluders to obtain occluded-unoccluded training pairs.
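Step 3 above can be sketched as a simple compositing operation: paste an occluder segment onto a frame that contains the complete object, and pair the result with the original frame. This is a minimal NumPy sketch with illustrative function and variable names (`overlay_occluder`, `top_left`), not the actual OvO pipeline code.

```python
import numpy as np

def overlay_occluder(frame, occluder_rgb, occluder_mask, top_left):
    """Paste an occluder segment onto an un-occluded frame.

    frame: (H, W, 3) uint8 video frame containing the complete object.
    occluder_rgb: (h, w, 3) uint8 RGB pixels of the occluder segment.
    occluder_mask: (h, w) bool silhouette of the occluder.
    top_left: (y, x) paste position; assumed to fit inside the frame.

    Pairing the returned occluded frame with the original `frame`
    yields one occluded/un-occluded training example.
    """
    occluded = frame.copy()
    y, x = top_left
    h, w = occluder_mask.shape
    region = occluded[y:y + h, x:x + w]      # view into the copy
    region[occluder_mask] = occluder_rgb[occluder_mask]
    return occluded

# Toy example: a gray frame partially covered by a white square occluder.
frame = np.full((64, 64, 3), 128, dtype=np.uint8)
occ_rgb = np.full((16, 16, 3), 255, dtype=np.uint8)
occ_mask = np.ones((16, 16), dtype=bool)
occluded_frame = overlay_occluder(frame, occ_rgb, occ_mask, (24, 24))
training_pair = (occluded_frame, frame)  # (input, ground truth)
```

Applying the same occluder with a consistent trajectory across all frames of a clip produces the temporally consistent occlusions the dataset requires.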

Method

We repurpose Stable Video Diffusion (SVD) for Video Amodal Completion (VAC) by progressively fine-tuning it on the Object-video-Overlay (OvO) dataset (first on OvO-Easy, then on OvO-Hard) and incorporating the occluded object's visible masks as a visual prompt that specifies the object of interest.
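The easy-to-hard schedule can be expressed as a small curriculum helper that maps a global training step to the dataset split to sample from. The step counts below are illustrative placeholders, not the paper's actual training budget.

```python
def curriculum_stage(step, stages):
    """Return the dataset split to sample at a given training step.

    `stages` is an ordered list of (dataset_name, num_steps) pairs,
    e.g. the easy-to-hard schedule described above. After the last
    stage is exhausted, training stays on the hardest split.
    """
    for name, n_steps in stages:
        if step < n_steps:
            return name
        step -= n_steps
    return stages[-1][0]

# Hypothetical schedule: 10k steps on OvO-Easy, then 20k on OvO-Hard.
schedule = [("OvO-Easy", 10_000), ("OvO-Hard", 20_000)]
```

In a training loop, `curriculum_stage(global_step, schedule)` would select which split's occluded-unoccluded pairs to feed to the fine-tuned SVD at each step.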

Results

Comparisons on Kubric-Static and Kubric-Dynamic

Results on ScanNet++

Results on BridgeData

Results on YouTube-VOS

Results on YCB-Video

Results on BDD100K

Results on in-the-wild videos

Long Videos

BibTeX


      @article{lu2025taco,
        title={TACO: Taming Diffusion for in-the-wild Video Amodal Completion},
        author={Lu, Ruijie and Chen, Yixin and Liu, Yu and Tang, Jiaxiang and Ni, Junfeng and Wan, Diwen and Zeng, Gang and Huang, Siyuan},
        journal={arXiv preprint arXiv:2503.12049},
        year={2025}
      }