---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-T2V-14B
pipeline_tag: text-to-video
tags:
- text-to-video
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    B0x13ng Boxing video two boxers are in the ring, but one of them is
    significantly shorter than the other. The shorter female fighter in black
    shorts is aggressively throwing punches, trying to reach his taller
    opponent. The taller fighter, wearing black shorts, simply holds out his
    glove on the shorter fighter’s forehead, keeping her at a frustrating
    distance.
  output:
    url: example_videos/boxing1.mp4
- text: >-
    B0x13ng Boxing Video The boxer throws a left jab and then a right hook,
    landing a knockout blow.
  output:
    url: example_videos/boxing2.mp4
- text: >-
    B0x13ng Boxing Video The boxer throws a quick series of jabs and then a
    right cross.
  output:
    url: example_videos/boxing3.mp4
---
This LoRA is trained on the Wan2.1 14B T2V model and allows you to generate videos of people boxing.
The key trigger phrase is: `B0x13ng Boxing Video`
For prompting, see the example prompts above; that prompting style works well with this LoRA.
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
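If you prefer running the LoRA outside ComfyUI, the sketch below shows one way to load it with Diffusers. This is a minimal, unverified example: it assumes the Diffusers-format base checkpoint `Wan-AI/Wan2.1-T2V-14B-Diffusers`, a placeholder local path for the Safetensors file from the Downloads section, and that the LoRA loads through the standard `load_lora_weights` call. The modified ComfyUI workflow above remains the recommended setup.

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

# Assumption: the Diffusers-format base checkpoint. This card's base_model is the
# original Wan-AI/Wan2.1-T2V-14B release used by the ComfyUI workflow.
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"

# The Wan VAE is typically kept in float32 for numerical stability.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

# Hypothetical local path to the LoRA file from the Downloads section above.
pipe.load_lora_weights("./boxing_lora.safetensors")
pipe.to("cuda")

# Prompts should start with the trigger phrase.
prompt = (
    "B0x13ng Boxing Video The boxer throws a left jab and then a right hook, "
    "landing a knockout blow."
)

frames = pipe(
    prompt=prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]

export_to_video(frames, "boxing.mp4", fps=16)
```

The resolution, frame count, guidance scale, and fps above are illustrative defaults, not values validated for this LoRA.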
Training was done using diffusion-pipe.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!