---
tags:
- ltx-video
- image-to-video
pinned: true
language:
- en
license: other
pipeline_tag: text-to-video
library_name: diffusers
---

# ltxv-13b-slime

This is a fine-tuned version of [`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors) trained on custom data.

## Model Details

- **Base Model:** [`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors)
- **Training Type:** LoRA fine-tuning
- **Training Steps:** 5
- **Learning Rate:** 0.0002
- **Batch Size:** 1

## Sample Outputs

| | |
|:---:|:---:|
| ![example1](./samples/sample_0.gif)<br>Prompt: SLIME green pouring from above on a man's head | ![example2](./samples/sample_1.gif)<br>Prompt: SLIME green pouring from above on a woman's head |

## Usage

This model is designed to be used with the LTXV (Lightricks Text-to-Video) pipeline.

### 🔌 Using Trained LoRAs in ComfyUI

To use the trained LoRA in ComfyUI:

1. Copy your ComfyUI-trained LoRA weights (`comfyui..safetensors` file) to the `models/loras` folder in your ComfyUI installation.
2. In your ComfyUI workflow:
   - Add the "LTXV LoRA Selector" node to choose your LoRA file
   - Connect it to the "LTXV LoRA Loader" node to apply the LoRA to your generation

You can find reference Text-to-Video (T2V) and Image-to-Video (I2V) workflows in the [official LTXV ComfyUI repository](https://github.com/Lightricks/ComfyUI-LTXVideo).

### Example Prompts

Example prompts used during validation:

- `SLIME green pouring from above on a man's head`
- `SLIME green pouring from above on a woman's head`

## License

This model inherits the license of the base model ([`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors)).

## Acknowledgments

- Base model by [Lightricks](https://huggingface.co/Lightricks)
- Training infrastructure: [LTX-Video-Trainer](https://github.com/Lightricks/ltx-video-trainer)