We present a 2D image-based Virtual Try-On (VTON) system applicable to online shopping. The proposed method eliminates the need for 3-D information about clothes and humans, leveraging recent advances in DNN-based image processing and computer graphics. In the first step, the input images are parsed into clothes and human-body parts with a semantic segmentation network, and the pose (joint) information of the cloth and human images is extracted with a pose estimation network. In the second step, the segmented cloth and the joint information are used jointly to match and deform the cloth image onto the target human image. In our full implementation and preliminary trials, the many successful results on simply shaped clothes and simple poses demonstrate the feasibility of the VTON system. However, we also observed many failure cases caused by diverse cloth shapes and poses. We discuss the causes of these failures to identify the applicable range of current state-of-the-art technology for commercial-level VTON, and the long-term key problems for a future VTON system.
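The two-step pipeline described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names (`estimate_pose`, `fit_similarity`, `try_on`), the keypoint dictionary format, and the use of a simple scale-plus-translation fit between shoulder joints (in place of the paper's actual cloth deformation) are all assumptions made for demonstration.

```python
# Minimal sketch of a two-step image-based VTON pipeline.
# All names, data formats, and the similarity-fit deformation are
# illustrative assumptions, not the system described in the paper.

def estimate_pose(image):
    # Placeholder for a DNN pose-estimation network: in a real system this
    # would run inference; here the "image" dict carries precomputed joints.
    return image["joints"]

def fit_similarity(src, dst):
    # Solve a scale + translation mapping the source shoulder pair onto the
    # target shoulder pair (a stand-in for the cloth deformation step).
    (sx1, sy1), (sx2, sy2) = src["l_shoulder"], src["r_shoulder"]
    (dx1, dy1), (dx2, dy2) = dst["l_shoulder"], dst["r_shoulder"]
    s = ((dx2 - dx1) ** 2 + (dy2 - dy1) ** 2) ** 0.5 / \
        ((sx2 - sx1) ** 2 + (sy2 - sy1) ** 2) ** 0.5
    tx, ty = dx1 - s * sx1, dy1 - s * sy1
    return lambda p: (s * p[0] + tx, s * p[1] + ty)

def try_on(cloth_img, human_img):
    # Step 1: joint extraction for both images (segmentation omitted here).
    cloth_joints = estimate_pose(cloth_img)
    human_joints = estimate_pose(human_img)
    # Step 2: deform the cloth pixel coordinates onto the target body.
    warp = fit_similarity(cloth_joints, human_joints)
    return [warp(p) for p in cloth_img["pixels"]]

cloth = {"joints": {"l_shoulder": (0, 0), "r_shoulder": (10, 0)},
         "pixels": [(0, 0), (10, 0)]}
human = {"joints": {"l_shoulder": (5, 5), "r_shoulder": (25, 5)}}
print(try_on(cloth, human))  # cloth shoulders land on the human shoulders
```

A production system would replace the placeholder pose function with a trained network and the similarity fit with a richer warp (e.g. thin-plate splines) driven by the full joint set and the segmentation masks.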
2. VTON system design
3. Clothes segmentation and pose estimation
4. Experimental Environment and Results