---
base_model:
- Lightricks/LTX-Video
library_name: diffusers
---

# Towards Suturing World Models (LTX-Video, t2v)

This repository hosts a fine-tuned LTX-Video text-to-video (t2v) diffusion model specialized for generating realistic robotic surgical suturing videos. It captures fine-grained sub-stitch actions, including needle positioning, targeting, driving, and withdrawal, and can differentiate between ideal and non-ideal surgical technique, making it suitable for surgical training, skill evaluation, and the development of autonomous surgical systems.

## Model Details

- **Base Model**: LTX-Video
- **Resolution**: 768×512 pixels (adjustable)
- **Frame Length**: 49 frames per generated video (adjustable)
- **Fine-tuning Method**: Low-Rank Adaptation (LoRA)
- **Data Source**: Annotated laparoscopic surgery exercise videos (~2,000 clips)

## Usage Example

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the base LTX-Video pipeline in bfloat16 and move it to the GPU.
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the suturing LoRA weights and activate the adapter at full strength.
pipe.load_lora_weights(
    "mehmetkeremturkcan/Suturing-LTX-T2V",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="ltxv-lora",
)
pipe.set_adapters(["ltxv-lora"], [1.0])

# Generate ten sample clips for the same prompt and save them as MP4 files.
for i in range(10):
    video = pipe(
        "suturingv2 A needledrivingnonideal clip, generated from a backhand task.",
        height=512,
        width=768,
        num_frames=49,
        num_inference_steps=30,
    ).frames[0]
    export_to_video(video, f"ltx_lora_t2v_{i}.mp4", fps=8)
```

A sketch that iterates over the other sub-stitch prompt variants appears at the end of this card.

## Applications

- **Surgical Training**: Generate demonstrations of both ideal and non-ideal surgical techniques for training purposes.
- **Skill Evaluation**: Assess surgical skill by comparing actual procedures against model-generated references.
- **Robotic Automation**: Inform autonomous robotic surgical systems for real-time guidance and procedure automation.

## Quantitative Performance

| Metric                 | Performance            |
|------------------------|------------------------|
| L2 Reconstruction Loss | 0.32576                |
| Inference Time         | ~6.1 seconds per video |

## Future Directions

Further work will focus on increasing model robustness, expanding dataset diversity, and improving real-time applicability to robotic surgical scenarios.
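
## Prompt Variants (Sketch)

The sketch below reuses the pipeline setup from the usage example and loops over the four sub-stitch actions and both technique qualities described above. The class tokens other than `needledrivingnonideal` are an assumption inferred from the pattern of the single prompt shown in the usage example; verify them against the training captions before relying on the outputs.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Same setup as in the usage example above.
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "mehmetkeremturkcan/Suturing-LTX-T2V",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="ltxv-lora",
)
pipe.set_adapters(["ltxv-lora"], [1.0])

# ASSUMPTION: these class tokens follow the pattern of the one example prompt
# ("needledrivingnonideal"); the actual caption vocabulary used during
# fine-tuning may differ, so adjust the strings to match your data.
actions = ["needlepositioning", "needletargeting", "needledriving", "needlewithdrawal"]
qualities = ["ideal", "nonideal"]

for action in actions:
    for quality in qualities:
        prompt = f"suturingv2 A {action}{quality} clip, generated from a backhand task."
        video = pipe(
            prompt,
            height=512,
            width=768,
            num_frames=49,
            num_inference_steps=30,
        ).frames[0]
        export_to_video(video, f"ltx_lora_t2v_{action}_{quality}.mp4", fps=8)
```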