---
license: cc-by-4.0
---

# GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control
CVPR 2025 (Highlight)

[Xuanchi Ren*](https://xuanchiren.com/),
[Tianchang Shen*](https://www.cs.toronto.edu/~shenti11/),
[Jiahui Huang](https://huangjh-pub.github.io/),
[Huan Ling](https://www.cs.toronto.edu/~linghuan/),
[Yifan Lu](https://yifanlu0227.github.io/),
[Merlin Nimier-David](https://merlin.nimierdavid.fr/),
[Thomas Müller](https://research.nvidia.com/person/thomas-muller),
[Alexander Keller](https://research.nvidia.com/person/alex-keller),
[Sanja Fidler](https://www.cs.toronto.edu/~fidler/),
[Jun Gao](https://www.cs.toronto.edu/~jungao/) <br>
\* indicates equal contribution <br>
**[Paper](https://arxiv.org/pdf/2503.03751), [Project Page](https://research.nvidia.com/labs/toronto-ai/GEN3C/)**

Abstract: We present GEN3C, a generative video model with precise Camera Control and
temporal 3D Consistency. Prior video models already generate realistic videos,
but they tend to leverage little 3D information, leading to inconsistencies,
such as objects popping in and out of existence. Camera control, if implemented
at all, is imprecise, because camera parameters are mere inputs to the neural
network, which must then infer how the video depends on the camera. In contrast,
GEN3C is guided by a 3D cache: point clouds obtained by predicting the
pixel-wise depth of seed images or previously generated frames. When generating
the next frames, GEN3C is conditioned on the 2D renderings of the 3D cache with
the new camera trajectory provided by the user. Crucially, this means that
GEN3C neither has to remember what it previously generated nor does it have to
infer the image structure from the camera pose. The model can instead focus
all its generative power on previously unobserved regions, as well as on advancing
the scene state to the next frame. Our results demonstrate more precise camera
control than prior work, as well as state-of-the-art results in sparse-view
novel view synthesis, even in challenging settings such as driving scenes and
monocular dynamic video. Results are best viewed as videos.

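To make the mechanism above concrete, here is a minimal, illustrative sketch of the cache-render-generate loop. It is not this repository's API: `depth_model`, `video_model`, and `renderer` are hypothetical stand-ins, and GEN3C generates multi-frame chunks, whereas this sketch advances one frame at a time for readability.

```python
import numpy as np

def unproject_to_cache(image, depth, K, cam_to_world):
    """Lift an RGB frame into a world-space point cloud using per-pixel
    depth (the '3D cache' described in the abstract)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    # Back-project pixels: X_cam = depth * K^{-1} [u, v, 1]^T
    rays = pix @ np.linalg.inv(K).T
    pts_cam = rays * depth.reshape(-1, 1)
    pts_world = pts_cam @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]
    return pts_world, image.reshape(-1, 3)

def generate_video(seed_image, trajectory, depth_model, video_model, renderer, K):
    """trajectory: list of 4x4 camera-to-world poses chosen by the user.
    depth_model / video_model / renderer are placeholders, not real APIs."""
    # 1. Build the 3D cache from the seed image's predicted depth.
    depth = depth_model(seed_image)
    cache_pts, cache_rgb = unproject_to_cache(seed_image, depth, K, np.eye(4))
    frames = []
    for cam_pose in trajectory:
        # 2. Render the cache from the user-specified camera pose ...
        cache_render = renderer(cache_pts, cache_rgb, cam_pose, K)
        # 3. ... and condition generation on that rendering, so the model
        # only inpaints unobserved regions and advances the scene state.
        frame = video_model(condition=cache_render)
        frames.append(frame)
        # 4. Fold the new frame back into the cache for world consistency.
        new_pts, new_rgb = unproject_to_cache(frame, depth_model(frame), K, cam_pose)
        cache_pts = np.concatenate([cache_pts, new_pts])
        cache_rgb = np.concatenate([cache_rgb, new_rgb])
    return frames
```

The key design point is step 3: because the cache rendering already pins down the geometry visible from the new pose, the generator never has to infer scene structure from the camera parameters themselves.
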
## Citation
```
@inproceedings{ren2025gen3c,
    title={GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control},
    author={Ren, Xuanchi and Shen, Tianchang and Huang, Jiahui and Ling, Huan and
            Lu, Yifan and Nimier-David, Merlin and Müller, Thomas and Keller, Alexander and
            Fidler, Sanja and Gao, Jun},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2025}
}
```