arxiv:2604.04934

Vanast: Virtual Try-On with Human Image Animation via Synthetic Triplet Supervision

Published on Apr 6 · Submitted by Hyunsoo Cha on Apr 8

Abstract

Vanast is a unified framework that generates garment-transferred human animation videos by combining image-based virtual try-on and pose-driven animation in a single process, addressing issues such as identity drift and garment distortion through synthetic triplet supervision and a Dual Module architecture.

AI-generated summary

We present Vanast, a unified framework that generates garment-transferred human animation videos directly from a single human image, garment images, and a pose guidance video. Conventional two-stage pipelines treat image-based virtual try-on and pose-driven animation as separate processes, which often results in identity drift, garment distortion, and front-back inconsistency. Our model addresses these issues by performing the entire process in a single unified step to achieve coherent synthesis. To enable this setting, we construct large-scale triplet supervision. Our data generation pipeline includes generating identity-preserving human images in alternative outfits that differ from garment catalog images, capturing full upper and lower garment triplets to overcome the single-garment-posed video pair limitation, and assembling diverse in-the-wild triplets without requiring garment catalog images. We further introduce a Dual Module architecture for video diffusion transformers to stabilize training, preserve pretrained generative quality, and improve garment accuracy, pose adherence, and identity preservation while supporting zero-shot garment interpolation. Together, these contributions allow Vanast to produce high-fidelity, identity-consistent animation across a wide range of garment types.
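The abstract does not spell out the Dual Module internals. Under one plausible reading — two conditioning streams (garment transfer and pose guidance) injected as zero-initialized residuals into a shared backbone, so the pretrained model's output is unchanged at initialization — a minimal NumPy sketch might look like the following. All names here (`dual_module_block`, `cross_attend`, the token shapes) are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared feature dimension and video latent tokens (illustrative sizes).
d = 8
tokens = rng.standard_normal((4, d))

# Condition tokens from hypothetical garment and pose encoders.
garment = rng.standard_normal((2, d))
pose = rng.standard_normal((3, d))

def cross_attend(q, kv):
    """Minimal single-head cross-attention of queries q over tokens kv."""
    scores = q @ kv.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ kv

# Zero-initialized output projections: at init each module contributes a
# zero residual, so the pretrained backbone's behavior is preserved exactly.
w_g, b_g = np.zeros((d, d)), np.zeros(d)
w_p, b_p = np.zeros((d, d)), np.zeros(d)

def dual_module_block(x):
    garment_res = cross_attend(x, garment) @ w_g + b_g
    pose_res = cross_attend(x, pose) @ w_p + b_p
    return x + garment_res + pose_res

out = dual_module_block(tokens)
assert np.allclose(out, tokens)  # identity mapping at initialization
```

During training, `w_g`/`w_p` would move away from zero so the two streams start steering garment appearance and motion, which is one common way adapter-style modules keep pretrained generative quality stable.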

Community

Paper author · Paper submitter

Given a human image and one or more garment images, our method generates virtual try-on with human image animation conditioned on a pose video while preserving identity.

the dual module fusion in vanast—splitting garment transfer from pose-guided animation inside a video diffusion transformer—feels like a crisp way to keep pretrained generative quality while aligning garments to motion. i'd love to see an ablation where you remove the garment-transfer stream to quantify how much identity and garment fidelity actually come from the motion path versus the garment conditioning. the synthetic triplet supervision is bold, but i wonder how the approach handles tricky garments with non-rigid drape or accessories that weren’t well represented in the triplets. the arxivlens breakdown helped me parse the method details, especially the multi-level conditioning, and it's a nice reference if you’re planning a reproduction pass: https://arxivlens.com/PaperView/Details/vanast-virtual-try-on-with-human-image-animation-via-synthetic-triplet-supervision-942-3b31657a

awesome!


Get this paper in your agent:

hf papers read 2604.04934
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 0 · Datasets: 0 · Spaces: 0 · Collections: 0

Cite arxiv.org/abs/2604.04934 in a model, dataset, or Space README.md to link it from this page, or add the paper to a collection.