matanby committed (verified)
Commit: ea4b0f3
Parent(s): dd3f87d

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ samples/sample_0.gif filter=lfs diff=lfs merge=lfs -text
+ samples/sample_1.gif filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,58 @@
+ ---
+ tags:
+ - ltx-video
+ - image-to-video
+ pinned: true
+ language:
+ - en
+ license: other
+ pipeline_tag: text-to-video
+ library_name: diffusers
+ ---
+
+ # ltxv-13b-slime
+
+ This is a fine-tuned version of [`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors) trained on custom data.
+
+ ## Model Details
+
+ - **Base Model:** [`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors)
+ - **Training Type:** LoRA fine-tuning
+ - **Training Steps:** 5
+ - **Learning Rate:** 0.0002
+ - **Batch Size:** 1
+
+ ## Sample Outputs
+
+ | | |
+ |:---:|:---:|
+ | ![example1](./samples/sample_0.gif)<br><details style="max-width: 300px; margin: auto;"><summary>Prompt</summary>SLIME green pouring from above on a man's head</details> | ![example2](./samples/sample_1.gif)<br><details style="max-width: 300px; margin: auto;"><summary>Prompt</summary>SLIME green pouring from above on a woman's head</details> |
+
+ ## Usage
+
+ This model is designed to be used with the LTXV (Lightricks Text-to-Video) pipeline.
+
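+ Since the card lists `diffusers` as the library, the LoRA can also be loaded outside ComfyUI. The following is a minimal sketch, assuming the base weights are available in diffusers format and that this LoRA is published as `matanby/ltxv-13b-slime`; the generation settings are illustrative, not prescribed values:
+
+ ```python
+ import torch
+ from diffusers import LTXPipeline
+ from diffusers.utils import export_to_video
+
+ # Assumed diffusers-format base checkpoint; swap in the one you actually use.
+ pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
+ pipe.to("cuda")
+
+ # LoRA weights from this repository (the repo id is an assumption).
+ pipe.load_lora_weights(
+     "matanby/ltxv-13b-slime",
+     weight_name="lora_weights_step_00005.safetensors",
+ )
+
+ # Illustrative generation settings; tune resolution, frame count, and steps for your hardware.
+ video = pipe(
+     prompt="SLIME green pouring from above on a man's head",
+     width=704,
+     height=480,
+     num_frames=121,
+     num_inference_steps=30,
+ ).frames[0]
+ export_to_video(video, "slime_sample.mp4", fps=24)
+ ```
+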
+ ### 🔌 Using Trained LoRAs in ComfyUI
+ To use the trained LoRA in ComfyUI:
+ 1. Copy your trained ComfyUI-format LoRA weights (the `comfyui_lora_weights_step_00005.safetensors` file in this repository) to the `models/loras` folder of your ComfyUI installation (see the download-and-copy sketch after this list).
+ 2. In your ComfyUI workflow:
+    - Add the "LTXV LoRA Selector" node to choose your LoRA file
+    - Connect it to the "LTXV LoRA Loader" node to apply the LoRA to your generation
+
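+ As a minimal sketch (the repository id `matanby/ltxv-13b-slime` and the ComfyUI install path are assumptions), the ComfyUI-formatted weights can be fetched and dropped into `models/loras` like so:
+
+ ```python
+ import shutil
+ from pathlib import Path
+
+ from huggingface_hub import hf_hub_download
+
+ # Download the ComfyUI-formatted LoRA from this repository (repo id is an assumption).
+ lora_path = hf_hub_download(
+     repo_id="matanby/ltxv-13b-slime",
+     filename="comfyui_lora_weights_step_00005.safetensors",
+ )
+
+ # Copy it into your ComfyUI installation's loras folder (adjust the path to your setup).
+ comfyui_loras = Path("/path/to/ComfyUI/models/loras")
+ shutil.copy(lora_path, comfyui_loras / "ltxv-13b-slime.safetensors")
+ ```
+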
+ You can find reference Text-to-Video (T2V) and Image-to-Video (I2V) workflows in the [official LTXV ComfyUI repository](https://github.com/Lightricks/ComfyUI-LTXVideo).
+
+ ### Example Prompts
+
+ Example prompts used during validation:
+
+ - `SLIME green pouring from above on a man's head`
+ - `SLIME green pouring from above on a woman's head`
+
+ ## License
+
+ This model inherits the license of the base model ([`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors)).
+
+ ## Acknowledgments
+
+ - Base model by [Lightricks](https://huggingface.co/Lightricks)
+ - Training infrastructure: [LTX-Video-Trainer](https://github.com/Lightricks/ltx-video-trainer)
comfyui_lora_weights_step_00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8cfe7655b8d50f7a9aa4903f3c415efd1e1d6ffc57ecf3abc709f94d1a6fe095
+ size 805412760
lora_weights_step_00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24791f6ed2f1d2ce24d227be5f8ed3779897c3de7a253b47c626bb726415554f
+ size 805409712
samples/sample_0.gif ADDED

Git LFS Details

  • SHA256: 6ffcf9b1449281007a28d022ebbd9f170287394ea100e8fd007abb7c52a30fe7
  • Pointer size: 133 Bytes
  • Size of remote file: 12.6 MB
samples/sample_1.gif ADDED

Git LFS Details

  • SHA256: b775c10ecc1e727d12ba13f9741609cc3e7516d62b8486c18e90c8b6c9f9d15e
  • Pointer size: 133 Bytes
  • Size of remote file: 12.7 MB