---
base_model:
- Lightricks/LTX-Video
library_name: diffusers
---

# Suturing World Model (LTX-Video, t2v)

<p align="center">
  <img src="https://github.com/mkturkcan/suturingmodels/blob/main/static/images/lora_sample.jpg?raw=true" />
</p>

This repository hosts a fine-tuned LTX-Video text-to-video (t2v) diffusion model specialized for generating realistic robotic surgical suturing videos. It captures fine-grained sub-stitch actions, including needle positioning, targeting, driving, and withdrawal, and can differentiate between ideal and non-ideal technique, making it suitable for surgical training, skill evaluation, and the development of autonomous surgical systems.
## Model Details

- **Base Model**: LTX-Video
- **Resolution**: 768×512 pixels (adjustable)
- **Frame Length**: 49 frames per generated video (adjustable)
- **Fine-tuning Method**: Low-Rank Adaptation (LoRA)
- **Data Source**: Annotated laparoscopic surgery exercise videos (~2,000 clips)
## Usage Example

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the base LTX-Video pipeline and attach the suturing LoRA adapter.
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "mehmetkeremturkcan/SuturingWorldModel-LTX-T2V",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="ltxv-lora",
)
pipe.set_adapters(["ltxv-lora"], [1.0])

# Sample ten clips from the same prompt and save each as an MP4.
for i in range(10):
    video = pipe(
        "suturingv2 A needledrivingnonideal clip, generated from a backhand task.",
        height=512,
        width=768,
        num_frames=49,
        num_inference_steps=30,
    ).frames[0]
    export_to_video(video, f"ltx_lora_t2v_{i}.mp4", fps=8)
```
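
The prompt appears to follow a fixed template: the trigger token `suturingv2`, a sub-stitch label fused with an ideal/non-ideal suffix (here `needledrivingnonideal`), and the source task. As a hypothetical sketch, a helper along these lines keeps prompts consistent; the label vocabulary below is inferred from the sub-stitch actions described above and the single example prompt, not from a documented spec:

```python
# Hypothetical prompt builder: the action/quality/task vocabulary is an
# assumption inferred from the example prompt and the model description.
ACTIONS = ["needlepositioning", "needletargeting", "needledriving", "needlewithdrawal"]
QUALITIES = ["ideal", "nonideal"]
TASKS = ["forehand", "backhand"]

def suturing_prompt(action: str, quality: str, task: str) -> str:
    """Compose a prompt in the same format as the example above."""
    assert action in ACTIONS and quality in QUALITIES and task in TASKS
    return f"suturingv2 A {action}{quality} clip, generated from a {task} task."
```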
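
On GPUs with limited memory, diffusers' model CPU offloading can be used in place of moving the whole pipeline to the device. A minimal sketch, assuming `accelerate` is installed:

```python
# Offload idle submodules to CPU instead of keeping the full pipeline on GPU.
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "mehmetkeremturkcan/SuturingWorldModel-LTX-T2V",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="ltxv-lora",
)
pipe.enable_model_cpu_offload()  # replaces .to("cuda")
```
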
## Applications

- **Surgical Training**: Generate demonstrations of both ideal and non-ideal surgical technique for trainees.
- **Skill Evaluation**: Assess surgical skill by comparing recorded procedures against model-generated references.
- **Robotic Automation**: Provide a predictive world model for autonomous surgical systems, supporting real-time guidance and procedure automation.
## Quantitative Performance

| Metric                 | Value                  |
|------------------------|------------------------|
| L2 reconstruction loss | 0.32576                |
| Inference time         | ~6.1 seconds per video |
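
For reference, an L2 reconstruction loss of this kind is typically a mean squared error between generated and ground-truth frames. A minimal sketch, assuming both videos are equal-shaped float tensors scaled to [0, 1] (the exact evaluation protocol is not documented in this card):

```python
import torch

def l2_reconstruction_loss(generated: torch.Tensor, reference: torch.Tensor) -> float:
    """Mean squared error over all frames, channels, and pixels."""
    assert generated.shape == reference.shape
    return torch.mean((generated - reference) ** 2).item()
```
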
## Future Directions

Future work will focus on improving model robustness, expanding dataset diversity, and making inference fast enough for real-time use in robotic surgical scenarios.
## Citation

Please cite our work if you find this model useful:

```bibtex
@article{turkcan2024suturing,
  title={Towards Suturing World Models: Learning Predictive Models for Robotic Surgical Tasks},
  author={Turkcan, Mehmet Kerem and Ballo, Mattia and Filicori, Filippo and Kostic, Zoran},
  journal={arXiv preprint arXiv:2024},
  year={2024}
}
```