---
license: cc-by-4.0
---

# 🧵 3DFDReal: 3D Fashion Data from the Real World

**ETRI Media Intellectualization Research Section, ETRI**

![Teaser](figures/teaser.png)

**3DFDReal** is a real-world fashion dataset tailored for 3D vision tasks such as **segmentation**, **reconstruction**, **rigging**, and **deployment in metaverse platforms**. Captured from high-resolution multi-view 2D videos (4K @ 60 fps), the dataset includes both **individual fashion items** and **combined outfits** worn by mannequins.

---

## 🔍 Overview

**3DFDReal** bridges the gap between high-quality 3D fashion modeling and practical deployment in virtual environments such as **ZEPETO**. It features over **1,000 3D point clouds**, each enriched with detailed metadata including:

- Class labels
- Gender and pose type
- Texture and semantic attributes
- Structured segmentations

This dataset provides a foundation for advancing research in **pose-aware 3D understanding**, **avatar modeling**, and **digital twin applications**.

---

## 🎥 Data Collection Pipeline

The dataset is built through a structured four-stage pipeline:

1. **Asset Selection:** Fashion items (e.g., shoes, tops, accessories) are selected and tagged individually or in sets.
2. **Recording Setup:** Items or mannequins are filmed from multiple viewpoints with an iPhone 13 Pro for 3D reconstruction.
3. **3D Ground-Truth Generation:** Videos are converted into colored point clouds and manually segmented using professional 3D labeling tools.
4. **Application & Validation:** Assets are rigged and tested in avatar environments such as ZEPETO for deployment readiness.

---

## 📊 Dataset Statistics

### 📈 Class Distribution

![Used Fashion Item Count for 3D Dataset](figures/fashion_class_distribution.png)

**Pants** and **sweatshirts** are used more often than other fashion items.
![Fashion Item Count in Mannequin-wear Combinations](figures/Count appears in Combination.png)

**Sneakers** and **pants** are the most frequent fashion items in mannequin-wear combinations.

### 👚 Combination Metadata

![Combination Overview](figures/combination_overview_stats.png)

Key observations:

- Most mannequin outfits contain **four distinct fashion items**.
- Gender distribution is balanced across combinations.
- **T-poses** are selectively used for rigging, while **upright poses** dominate standard recordings.

---

## 📁 Dataset Structure

```
dataset/
├── PointCloud_Asset/
├── Video_Asset/
├── Label_Asset/
├── PointCloud_Combine/
├── Video_Combine/
├── Label_Combine/
└── meta/
    ├── asset_meta.json
    ├── combination_meta.json
    ├── train_combination_meta.json
    ├── val_combination_meta.json
    ├── test_combination_meta.json
    └── label_map.csv
```

---

## 📦 Data Description

### 🔹 Individual Asset Files

- **PointCloud_Asset/**
  Raw point clouds of individual clothing or body parts in `.ply` format.
- **Video_Asset/**
  Rendered 3D videos of individual assets showing different rotations or views.
- **Label_Asset/**
  Label information (e.g., category, class ID) for each individual asset.

---

### 🔹 Combined Assets (Mannequin Representations)

- **PointCloud_Combine/**
  Combined point clouds representing mannequins wearing multiple assets. Split into `train`, `val`, and `test` sets.
- **Video_Combine/**
  Rendered 3D videos of mannequins with asset combinations, also split into `train`, `val`, and `test`.
- **Label_Combine/**
  Label files corresponding to the combined point clouds and videos.
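As a minimal sketch of how the `.ply` point clouds might be read with only the Python standard library (assuming ASCII-encoded PLY files whose vertex properties are `x`, `y`, `z` followed by `red`, `green`, `blue`; actual files in the dataset may be binary or ordered differently, in which case a library such as `open3d` or `plyfile` is the safer choice):

```python
# Minimal ASCII PLY reader sketch. Assumes float x/y/z followed by
# uchar red/green/blue per vertex line; this is an illustration, not
# a full PLY parser (no binary support, no property reordering).
def load_ascii_ply(path):
    with open(path, "r") as f:
        lines = f.read().splitlines()
    assert lines[0].strip() == "ply", "not a PLY file"
    n_vertices, header_end = 0, 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        if line.strip() == "end_header":
            header_end = i + 1
            break
    points, colors = [], []
    for line in lines[header_end:header_end + n_vertices]:
        vals = line.split()
        points.append(tuple(float(v) for v in vals[:3]))
        colors.append(tuple(int(v) for v in vals[3:6]))
    return points, colors
```

The returned `(points, colors)` lists can then be converted to NumPy arrays or tensors for segmentation or reconstruction pipelines.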
---

## 🗂️ Metadata Files (`meta/`)

Each metadata entry contains:

- `label_str`: class name
- `gender`, `pose`, `type`
- `wnlemmas`: fine-grained semantic tags

File overview:

- **asset_meta.json**: Metadata for individual assets.
- **combination_meta.json**: Metadata for all combinations.
- **train_combination_meta.json**, **val_combination_meta.json**, **test_combination_meta.json**: Define which combinations belong to each data split.
- **label_map.csv**: Maps each label's full ID from the first data acquisition to the corresponding full ID from the second acquisition.

---

## 🧪 Benchmarks

### 3D Object Segmentation

The baseline model, [**SAMPart3D**](https://yhyang-myron.github.io/SAMPart3D-website/), achieves high segmentation quality (mIoU: 0.9930) but shows varying average precision (AP) across classes.

![3D object segmentation with SAMPart3D](figures/seg_tuning.png)

### 3D Data Reconstruction

The baseline models are [**DDPM**](https://github.com/lucidrains/denoising-diffusion-pytorch), a diffusion-based probabilistic model, for the generation task, and [**SVDFormer**](https://github.com/czvvd/SVDFormer_PointSea) for the completion task. Performance is measured using Chamfer Distance (CD), Density-aware Chamfer Distance (DCD), and F1-score (F1). For DDPM, sampled point clouds are shuffled without considering the sampling ratio *n*, and DDPM's performance is measured with CD, reaching an average CD of 0.628 ± 0.887.
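The Chamfer Distance used in these benchmarks can be sketched in pure Python with brute-force nearest-neighbor search (real evaluations typically use accelerated GPU implementations, and conventions vary, e.g. squared vs. non-squared distances and how the two directional terms are combined, so the exact normalization here is an assumption):

```python
# Symmetric Chamfer Distance between two 3D point sets (brute force).
# CD(P, Q) = mean_p min_q ||p - q||^2 + mean_q min_p ||q - p||^2
def chamfer_distance(P, Q):
    def sq_dist(a, b):
        # Squared Euclidean distance between two points.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def one_sided(A, B):
        # For each point in A, distance to its nearest neighbor in B,
        # averaged over A.
        return sum(min(sq_dist(a, b) for b in B) for a in A) / len(A)

    return one_sided(P, Q) + one_sided(Q, P)
```

For identical point sets the distance is zero; the brute-force search is O(|P|·|Q|), so for the dataset's full point clouds a KD-tree or GPU kernel would be used instead.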
![3D data reconstruction example](figures/sampledPC.png)

---

## 💻 Use Cases

- **Virtual try-on**
- **Metaverse asset creation**
- **Pose-aware segmentation**
- **Avatar rigging & deformation simulation**

## 📃 License

CC-BY 4.0

---

## 📚 Citation

```bibtex
@misc{3DFDReal,
  title={3DFDReal: 3D Fashion Data from the Real World},
  author={Jiyoun Lim and Jungwoo Son and Alex Lee and Sun-Joong Kim and Nam Kyung Lee and Won-Joo Park},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/kusses/3DFDReal}},
}
```

---

## 💬 Contact

For questions, please reach out via [kusses@etri.re.kr](mailto:kusses@etri.re.kr) or use the Discussions tab on Hugging Face.