- LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes
  Paper • 2311.13384 • Published • 53
- Disentangled 3D Scene Generation with Layout Learning
  Paper • 2402.16936 • Published • 12
- WonderWorld: Interactive 3D Scene Generation from a Single Image
  Paper • 2406.09394 • Published • 3
- VideoScene: Distilling Video Diffusion Model to Generate 3D Scenes in One Step
  Paper • 2504.01956 • Published • 41
Collections including paper arxiv:2404.07199
- PointInfinity: Resolution-Invariant Point Diffusion Models
  Paper • 2404.03566 • Published • 16
- RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion
  Paper • 2404.07199 • Published • 28
- Revising Densification in Gaussian Splatting
  Paper • 2404.06109 • Published • 9
- Magic-Boost: Boost 3D Generation with Multi-View Conditioned Diffusion
  Paper • 2404.06429 • Published • 7
- Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models
  Paper • 2312.09608 • Published • 16
- CodeFusion: A Pre-trained Diffusion Model for Code Generation
  Paper • 2310.17680 • Published • 73
- ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Real Image
  Paper • 2310.17994 • Published • 8
- Progressive Knowledge Distillation Of Stable Diffusion XL Using Layer Level Loss
  Paper • 2401.02677 • Published • 24
- One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning
  Paper • 2306.07967 • Published • 24
- Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation
  Paper • 2306.07954 • Published • 111
- TryOnDiffusion: A Tale of Two UNets
  Paper • 2306.08276 • Published • 74
- Seeing the World through Your Eyes
  Paper • 2306.09348 • Published • 33
- No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance
  Paper • 2404.04125 • Published • 30
- CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching
  Paper • 2404.03653 • Published • 37
- Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models
  Paper • 2404.02747 • Published • 13
- 3D Congealing: 3D-Aware Image Alignment in the Wild
  Paper • 2404.02125 • Published • 10
- SHINOBI: Shape and Illumination using Neural Object Decomposition via BRDF Optimization In-the-wild
  Paper • 2401.10171 • Published • 14
- Sketch2NeRF: Multi-view Sketch-guided Text-to-3D Generation
  Paper • 2401.14257 • Published • 12
- pix2gestalt: Amodal Segmentation by Synthesizing Wholes
  Paper • 2401.14398 • Published • 10
- AGG: Amortized Generative 3D Gaussians for Single Image to 3D
  Paper • 2401.04099 • Published • 9
- HiFi4G: High-Fidelity Human Performance Rendering via Compact Gaussian Splatting
  Paper • 2312.03461 • Published • 17
- COLMAP-Free 3D Gaussian Splatting
  Paper • 2312.07504 • Published • 15
- Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models
  Paper • 2312.13763 • Published • 11
- AGG: Amortized Generative 3D Gaussians for Single Image to 3D
  Paper • 2401.04099 • Published • 9